HIP SURGICAL NAVIGATION USING FLUOROSCOPY AND TRACKING SENSORS
Patent Abstract:
A method for tracking the movement of a body part, the method comprising: (a) collecting movement data from a repositioned body part within a range of motion, the body part having a motion sensor mounted on it; (b) collecting a plurality of radiographic images taken of the body part while the body part is in different positions within the range of motion, the plurality of radiographic images having the body part and the motion sensor within a field of view; and (c) building a virtual three-dimensional model of the body part from the plurality of radiographic images using an identifiable motion sensor structure within at least two of the plurality of radiographic images to calibrate the radiographic images.

Publication number: BR112019025107A2
Application number: R112019025107-3
Filing date: 2018-06-19
Publication date: 2020-06-23
Inventor: Mohamed R. Mahfouz
Applicant: Mohamed R. Mahfouz
IPC main class:
Patent Description:
[001] This application claims the benefit of US Provisional Patent Application Serial No. 62/521,582, entitled "Surgical Navigation of the Hip using Fluoroscopy and Tracking Sensors", filed on June 19, 2017; claims the benefit of US Provisional Patent Application Serial No. 62/575,905, entitled "Surgical Navigation of the Hip using Fluoroscopy and Tracking Sensors", filed on October 23, 2017; and claims the benefit of US Provisional Patent Application Serial No. 62/617,383, entitled "Surgical Navigation of the Hip using Fluoroscopy and Tracking Sensors", filed on January 15, 2018, the disclosures of which are incorporated herein by reference.

INTRODUCTION TO THE INVENTION

[002] Computed tomography (CT) or magnetic resonance imaging (MRI) is generally considered the gold standard for joint imaging, specifically in applications that require a virtual anatomical model. The images can be used by segmentation software to perform three-dimensional (3D) reconstruction, the output of which is a surface model of the patient's joint. These models can include bone, cartilage, other soft tissues, or any combination thereof. Accordingly, these 3D models are often used in modern navigation and surgical guidance systems for total joint replacement surgery. However, creating these models often takes time, resulting in increased costs and

[003] A difficult process related to the intraoperative use of navigation and surgical guidance systems is the registration of the location and orientation of the patient's joints with the navigation system. This is usually done by recording the locations of bony landmarks on the patient under the supervision of the guidance system, from which the positions and orientations of the joints can be calibrated to the system. Traditionally, this is done manually in the operating room, is time consuming, and is potentially inaccurate.

[004] A technique has been developed that uses a patient-specific instrument for registration, where the instrument is made to fit the patient's bone in a unique way. The instrument can be additively manufactured and sterilized for the patient. Several challenges are present in this process. The first is related to manufacturing: some patients may not have bone geometry that can uniquely "lock" the patient-specific instrument, which can introduce registration errors. In addition, due to the nature of additive manufacturing, the material is generally porous, which can affect the tolerance of the instrument depending on the humidity level. Another issue is the high cost and lead time required to manufacture these instruments. Often, an engineer is required to perform segmentation and analyze the joint geometry to create the instrument's locking mechanism against the patient's joint, which can take weeks to complete, depending on volume. As part of this disclosure, a new registration technique using medical imaging is presented that avoids the need to manufacture any additional devices.

[005] By coupling intraoperative radiographic imaging to an inertial tracking system, the patient can be registered in the operating room without the burden of manufacturing a patient-specific instrument or manually identifying landmarks.
[006] It is a first aspect of the present invention to provide a method of tracking movement of a body part, the method comprising: (a) collecting motion data from a repositioned body part within a range of motion, the body part having a motion sensor mounted on it; (b) collecting a plurality of radiographic images taken of the body part while the body part is in different positions within the range of motion, the plurality of radiographic images having the body part and the motion sensor within a field of view; and (c) building a virtual three-dimensional model of the body part from the plurality of radiographic images using an identifiable motion sensor structure within at least two of the plurality of radiographic images to calibrate the radiographic images.

[007] In a more detailed embodiment of the first aspect, the motion sensor comprises an inertial measurement unit. In yet another more detailed embodiment, the inertial measurement unit comprises a plurality of accelerometers, a plurality of gyroscopes, and a plurality of magnetometers. In a further detailed embodiment, the motion sensor is mounted rigidly on the body part. In an even more detailed embodiment, the motion sensor is mounted outside an epidermis covering at least part of the body part. In a more detailed embodiment, the motion sensor is rigidly mounted on the body part. In a more detailed embodiment, the structure of the motion sensor comprises at least one of a resistor, a chip, a capacitor, a circuit board, and an electrical conductor. In another more detailed embodiment, the radiographic image comprises an X-ray. In yet another more detailed embodiment, the radiographic image comprises a fluoroscopic image. In still another more detailed embodiment, the calibration of the radiographic images is performed automatically.

[008] In yet another more detailed embodiment of the first aspect, the automatic calibration of the radiographic images is performed by a computer running a software program. In yet another more detailed embodiment, the method also includes collecting data from the motion sensor that can be used to determine at least one of the motion sensor's position and rotation as a function of time. In a further detailed embodiment, the data collected from the motion sensor is collected wirelessly. In an even more detailed embodiment, the data collected from the motion sensor is collected from a wire connected to the motion sensor. In a more detailed embodiment, the data collected from the motion sensor is collected by at least one of a phone, a computer, a tablet, and a portable memory device. In a more detailed embodiment, the method also includes registering, in three-dimensional space, the motion sensor to the virtual three-dimensional model of the body part, and correlating data collected from the motion sensor as a function of the position of the body part to create a dynamic virtual model of the body part that is repositionable to reflect the actual positions of the body part when repositioned within the range of motion. In another more detailed embodiment, the method also includes building a virtual three-dimensional model of the motion sensor using the plurality of radiographic images. In yet another more detailed embodiment, the virtual three-dimensional model of the motion sensor is integrated with the virtual three-dimensional model of the body part to create a combined three-dimensional virtual model.
In yet another more detailed embodiment, the method also includes correlating data collected from the motion sensor as a function of the position of the body part to provide dynamic movement to the combined three-dimensional virtual model.

[009] In a more detailed embodiment of the first aspect, collecting motion data includes recording at least one of changes in the position and rotation of the motion sensor as a function of time. In yet another more detailed embodiment, collecting motion data includes recording changes in the acceleration of the motion sensor as a function of time. In a further detailed embodiment, the method also includes displaying the virtual three-dimensional model of the body part to reflect changes in the position of the real body part in real time. In an even more detailed embodiment, the collected motion data is time-stamped.

[0010] It is a second aspect of the present invention to provide a system for tracking the movement of a body part, the system comprising: (a) a motion sensor; (b) a processor configured to be communicatively coupled to the motion sensor, the processor communicatively coupled to a plurality of modules, the modules comprising: (i) a data reception module configured to record motion data generated by the motion sensor, at least one of the data reception module and the motion sensor time-stamping the motion data generated by the motion sensor; (ii) a radiographic image processing module configured to identify a common feature visible across a plurality of radiographic images in order to calibrate the plurality of radiographic images; and (iii) a three-dimensional model module configured to process a plurality of radiographic images and create a virtual three-dimensional model of an object visible in at least some of the plurality of radiographic images.

[0011] In a more detailed embodiment of the second aspect, the motion sensor includes an inertial measurement unit. In yet another more detailed embodiment, the motion sensor includes a plurality of accelerometers. In a further detailed embodiment, the motion sensor includes a plurality of magnetometers. In an even more detailed embodiment, the motion sensor includes a plurality of gyroscopes. In a more detailed embodiment, the system also includes a display communicatively coupled to the processor and operative to display the virtual three-dimensional model. In a more detailed embodiment, the system also includes a radiographic image capture machine.

[0012] It is a third aspect of the present invention to provide a method for providing surgical navigation, the method comprising: (a) obtaining a plurality of radiographic images taken intraoperatively from multiple vantage points that include a body part and at least one image target; (b) registering the body part intraoperatively to a navigation system; (c) calculating at least one of an orientation and a position of the body part in a three-dimensional coordinate system used by the navigation system; and (d) displaying a virtual model of a tangible item comprising at least one of a body part, a surgical instrument, and an orthopedic implant, where displaying the virtual model includes changing, in real time, at least one of a position and orientation of the virtual model to correspond to a change in at least one of the position and orientation of the tangible item.
[0013] In a more detailed embodiment of the third aspect, the virtual model of the tangible item comprises a three-dimensional model associated with the navigation system, and the registration step includes registering a two-dimensional image of the body part to the three-dimensional model.

[0014] In yet another more detailed embodiment of the third aspect, the method also includes obtaining a plurality of radiographic images taken preoperatively from multiple vantage points that include the body part, and creating a virtual three-dimensional model of the body part from the plurality of radiographic images. In yet another more detailed embodiment, the method also includes calibrating the plurality of radiographic images taken preoperatively before the creation of the virtual three-dimensional model. In a further detailed embodiment, the method also includes planning a surgical procedure using the virtual three-dimensional model. In an even more detailed embodiment, the method also includes collecting movement data from the repositioned body part within a range of motion, the body part having a motion sensor mounted on it. In a more detailed embodiment, the motion sensor comprises an inertial measurement unit. In a more detailed embodiment, the inertial measurement unit comprises a plurality of accelerometers, a plurality of gyroscopes, and a plurality of magnetometers. In another more detailed embodiment, the motion sensor is mounted rigidly on the body part. In yet another more detailed embodiment, the motion sensor is mounted outside an epidermis covering at least part of the body part. In yet another more detailed embodiment, the motion sensor is rigidly mounted on the body part.

[0015] In a more detailed embodiment of the third aspect, the plurality of radiographic images comprises a plurality of X-ray images. In yet another more detailed embodiment, the plurality of radiographic images comprises a plurality of fluoroscopic images. In a further detailed embodiment, the method also includes calibrating the plurality of radiographic images obtained intraoperatively. In an even more detailed embodiment, the calibration of the plurality of radiographic images is performed automatically by a computer running a software program. In a more detailed embodiment, the method also includes collecting data from the motion sensor that can be used to determine at least one of the motion sensor's position and rotation as a function of time. In a more detailed embodiment, the data collected from the motion sensor is collected wirelessly. In another more detailed embodiment, the data collected from the motion sensor is collected from a wire connected to the motion sensor.

[0016] In yet another more detailed embodiment of the third aspect, the data collected from the motion sensor is collected by at least one of a phone, a computer, a tablet, and a portable memory device. In yet another more detailed embodiment, the method also includes registering, in three-dimensional space, the motion sensor to a virtual three-dimensional model of the body part, and correlating data collected from the motion sensor as a function of the position of the body part to create a dynamic virtual model of the body part that is repositionable to reflect the actual positions of the body part when repositioned within a range of motion.
In a further detailed embodiment, the method also includes building a virtual three-dimensional model of the motion sensor using the plurality of radiographic images. In a more detailed embodiment, the virtual three-dimensional model of the motion sensor is integrated with the virtual three-dimensional model of the body part to create a combined three-dimensional virtual model. In a more detailed embodiment, the method also includes correlating data collected from the motion sensor as a function of the position of the body part to provide dynamic movement to the combined three-dimensional virtual model. In another more detailed embodiment, collecting motion data includes recording at least one of changes in the position and rotation of the motion sensor as a function of time. In yet another more detailed embodiment, collecting motion data includes recording changes in the acceleration of the motion sensor as a function of time. In yet another more detailed embodiment, the collected motion data is time-stamped.

BRIEF DESCRIPTION OF THE DRAWINGS

[0017] Figure 1A is an overview of an exemplary hip surgical navigation system using preoperative X-rays or fluoroscopy, in accordance with the present disclosure.

[0018] Figure 1B is an overview of an exemplary hip surgical navigation system using intraoperative X-rays or fluoroscopy, in accordance with the present disclosure.

[0019] Figure 2 is an overview of an exemplary system for preoperative image-based planning with a real-time intraoperative image-based navigation system using tracking sensors, in accordance with the present disclosure.

[0020] Figure 3 is an exemplary sensorless variation of preoperative image-based planning plus a real-time intraoperative image-based navigation system, in accordance with the present disclosure.

[0021] Figure 4 is an overview of an exemplary intraoperative image-based planning system with a real-time intraoperative image-based navigation system using tracking sensors, in accordance with the present disclosure.

[0022] Figure 5 is an exemplary configuration for navigation using X-ray images without a real-time tracking system and without reconstruction, in accordance with the present disclosure.

[0023] Figure 6 is an exemplary process flow representing non-rigid registration for creating patient-specific three-dimensional pelvis and proximal femur models from X-rays, in accordance with the present disclosure.

[0024] Figure 7 is an exemplary representation of feature detection in X-ray images of the pelvis taken in different views, in accordance with the present disclosure.

[0025] Figure 8 is an exemplary representation showing feature correspondence between multiple 2D images taken in different views, using only the grid-based motion statistics approach, in pelvis X-ray images, in accordance with the present disclosure.

[0026] Figure 9 is an exemplary representation showing feature correspondence between multiple 2D images taken in different views, using the combination of grid-based motion statistics and vector field consensus approaches, in pelvis X-ray images, in accordance with the present disclosure.

[0027] Figure 10 is an exemplary representation of DRR images of the right Judet view, the AP view, and the left Judet view of an exemplary pelvis model, in accordance with the present disclosure.
[0028] Figure 11 is an exemplary X-ray reconstruction, in accordance with the present disclosure, showing highlighted bone boundaries used for the transformation.

[0029] Figure 12 is an exemplary representation of multiple X-rays of the pelvis taken from the perspectives of the right Judet view, the AP view, and the left Judet view.

[0030] Figure 13 is an exemplary representation showing X-ray images with a bone and an attached sensor.

[0031] Figure 14 is an exemplary representation showing sensors attached to the patient prior to imaging, and the resulting X-ray images containing both patient joint and sensor information.

[0032] Figure 15 is an exemplary process flow of a process for reconstructing the patient's anatomy and marking landmarks from fluoroscopy, in accordance with the present disclosure.

[0033] Figure 16 is an exemplary representation showing fluoroscopic images of the geometric calibration grid before distortion removal (left) and after distortion removal (right), in accordance with the present disclosure.

[0034] Figure 17 is an exemplary pose and shape parameter optimization process flow, in accordance with the present disclosure.

[0035] Figure 18 is an exemplary representation showing the relationship between the input-space distance and feature-space distances as part of this disclosure.

[0036] Figure 19 is an exemplary representation of screen captures showing the automatic calculation of anatomical landmarks on 3D virtual models of the pelvis and femur.

[0037] Figure 20 is an exemplary representation of screen captures showing the planned placement of an acetabular cup and a femoral stem on virtual 3D models of the pelvis and femur.

[0038] Figure 21 is a screen capture depicting a generic model of an acetabular cup in relation to a virtual 3D model of the pelvis.

[0039] Figure 22 is a partial screen capture depicting generic femoral stem templating and measurements using the geometry of the intramedullary canal to assess the different implant locations that are used to calculate the ideal implant diameter, neck angle, and head offset.

[0040] Figure 23 is an exemplary software screen capture demonstrating dynamic planning information, including ligament tensions during different stages of activities and the contact map of the implants, in accordance with the present disclosure.

[0041] Figure 24 is an exemplary process flowchart describing two-dimensional to three-dimensional registration in accordance with the present disclosure.

[0042] Figure 25A is a screen capture of an exemplary software user interface for registering a three-dimensional model to a two-dimensional X-ray or fluoroscopic image, where the user selects a set of landmarks in the image that correspond to the anatomical landmarks, in accordance with this disclosure.

[0043] Figure 25B is a screen capture of an exemplary software user interface for registering a three-dimensional model to a two-dimensional X-ray or fluoroscopic image, where the landmarks are used in a first-pass optimization that generates the pose resulting in the shortest distance between projected landmarks and selected landmarks, in accordance with the present disclosure.
[0044] Figure 25C is a screen capture of an exemplary software user interface for registering a three-dimensional model to a two-dimensional X-ray or fluoroscopic image, where a further optimization (initialized at the pose output by the landmark optimization, and minimizing a cost function based on a projected image of the 3D model and the 2D image) results in a final pose of the three-dimensional model relative to the two-dimensional image.

[0045] Figure 26 is an image of an exemplary radiograph of a hip/pelvis.

[0046] Figure 27 is a three-dimensional bone model registered to the image of Figure 26 in accordance with the present disclosure.

[0047] Figure 28 is an exemplary representation of configuration A for intraoperative imaging, in which each bone of the joint (here, the hip joint) is attached to a tracking device by means of a fixation device, where each tracking device may include four or more radiopaque features attached to or incorporated into it.

[0048] Figure 29 is an exemplary representation of configuration B for intraoperative imaging, where each joint bone (here, the hip joint) is attached to a tracking device through a fixation device (in this case, bone pins), and an additional tracking device accessory can be attached between the tracking device and the fixation device, where the accessory and the tracking device each have four or more radiopaque features attached to or incorporated into them.

[0049] Figure 30 is an exemplary radiographic image (for example, fluoroscopy) showing a pelvis, a tracking device, a fixation device, and four radiopaque features incorporated into the tracking device.

[0050] Figure 31 is an exemplary assembly showing a tracking/sensor device, a fixation device (reference set), and an imaging/registration target with several radiopaque features incorporated into it, being fixed to the patient's pelvis, in accordance with this disclosure.

[0051] Figure 32 shows exemplary X-ray images taken with (A) an image target mounted on a pelvis and (B) a virtual model of the image target superimposed on the X-ray image, in accordance with the present disclosure.

[0052] Figure 33 shows (a) an image target, for registering the tracking device/sensor, rigidly attached to the anatomy, by means of an X-ray image, and (b) a virtual 3D model of the anatomy and the tracking device/sensor mounted on it that corresponds to the X-ray image.

[0053] Figure 34 depicts an exemplary image target in accordance with the present disclosure.

[0054] Figure 35 depicts multiple views of the image target of Figure 34.

[0055] Figure 36 depicts multiple views of an exemplary reference set comprising the image target of Figure 34 mounted on a reference piece and a tracking/sensor device.

[0056] Figure 37 depicts the exemplary reference set of Figure 36 with and without bone pins mounted on it.

[0057] Figure 38 depicts the exemplary reference set of Figure 36 mounted on a virtual model of a pelvis (left) and mounted on a real pelvis as seen in an X-ray image (right), in an anteroposterior view.

[0058] Figure 39 depicts the exemplary reference set of Figure 36 mounted on a virtual model of a pelvis (left) and mounted on a real pelvis as seen in an X-ray image (right), in a Judet view.
[0059] Figure 40 is an exemplary 3D virtual model of the hip joint, showing several exemplary reference sets of Figure 36 mounted respectively on the pelvis and femur, which can enable 3D-to-2D registration and sensor navigation in accordance with the present disclosure.

[0060] Figure 41 is an exemplary representation showing that the system establishes a relationship between the real-world coordinate of the tracking device, q0, and the image coordinate, q1, through the image target, using the fiducial markers on the radiographic images.

[0061] Figure 42 illustrates an exemplary graphical view of a registration process in accordance with the present disclosure.

[0062] Figure 43 illustrates an exemplary process flow for performing an exemplary registration process in accordance with the present disclosure.

[0063] Figure 44A illustrates a PAM reference set mounted on the patient's anatomy in a predetermined position and orientation that can be used to register image and model data, in addition to facilitating real-time surgical navigation.

[0064] Figure 44B illustrates an exemplary calibration matrix between the system's real-time navigation (TN) coordinate system and the calibration target (TC) coordinate system.

[0065] Figure 45 is a graphical representation of a radiographic imaging system that can be used in a surgical procedure to check the placement position of an implant, in accordance with the present disclosure.

[0066] Figure 46 is a screen capture of an exemplary surgical guidance display in accordance with the present disclosure, depicting a virtual 3D model of the patient's anatomy (in this case, the hip) and the intended orientation with which a surgeon should proceed to place the acetabular cup implant consistent with a preoperative plan, where deviations from the intended orientation are shown in a target illustration to bring any deviation back to the intended orientation.

[0067] Figure 47 is a representation showing that, once implant information is known to the surgical guidance software, the 3D CAD models of the same implants can be used to perform 3D-model-to-2D-image registration; once the 3D-to-2D image registration is completed, orientation metrics such as combined anteversion and abduction angles can be determined based on the relative orientation of the registered 3D models.

[0068] Figure 48 is a graphical representation reflecting that spatial measurements, such as leg length, can be taken from the registered 3D models to obtain 3D measurements, as compared to direct 2D measurement on the radiographic image, which can eliminate the ambiguity of clinical measurements made from a flat 2D image.

[0069] Figure 49 is an exemplary flowchart describing a process for detecting the orientation of the cup and stem without using an orthopedic implant CAD model.

[0070] Figure 50 is an exemplary radiographic image showing an RPO Judet view of an exemplary embodiment in accordance with the present disclosure, showing the placement of the image target in relation to the patient's anatomy (pelvis).

[0071] Figure 51A is an exemplary radiographic image showing an AP view of an exemplary embodiment, in accordance with the present disclosure, showing the placement of the image target in relation to the patient's anatomy (pelvis).
[0072] Figure 51B is an exemplary radiographic image showing an LPO Judet view of an exemplary embodiment in accordance with the present disclosure, showing the placement of the image target in relation to the patient's anatomy (pelvis).

[0073] Figure 52 is an exemplary flowchart for automatic extraction of the stereo calibration matrix in accordance with the present disclosure.

[0074] Figure 53 is a series of radiographic images showing an initial output of a computer program that automatically detects radiopaque spheres visible in radiographic images.

[0075] Figure 54 is a screen capture of an exemplary computer program, in accordance with the present disclosure, showing multiple radiographic images and the result of the automatic pose estimation of the calibration target appearing in the images.

[0076] Figure 55 is an exemplary flowchart describing an exemplary process for identifying three-dimensional landmarks from a number "n" of two-dimensional stereo images, in accordance with the present disclosure.

[0077] Figure 56 is an exemplary graphical representation reflecting anatomical surface edge detection of a pelvis extracted from an AP radiographic image of the pelvis.

[0078] Figure 57 is a graphical view of an exemplary user interface for the generation of three-dimensional landmarks from two-dimensional stereo images, in accordance with the present disclosure.

[0079] Figure 58 is an exemplary screen capture taken from a user interface, showing multiple views of a virtual 3D model of a pelvis and reflecting the placement of an orthopedic cup, where the user interface is used for intraoperative surgical planning using 3D landmarks extracted from intraoperative stereo images.

[0080] Figure 59 is a flowchart of an exemplary process of using dynamic data throughout the surgical treatment episode to create patient-specific implants and instruments, place the implant, and monitor postoperative performance.

[0081] Figure 60 is a flowchart describing an exemplary process of creating anatomical information from dynamic image data.

[0082] Figure 61 is a diagram representing initialization with a hybrid classifier.

[0083] Figure 62 is a diagram representing the variation of the KPCA model applied to a knee joint.

[0084] Figure 63 is a diagram representing a process for optimization of pose and shape parameters.

[0085] Figure 64 is a diagram describing the relationship between the input-space distance and feature-space distances.

[0086] Figure 65 comprises exemplary images of shoulder and hip reconstruction from X-ray fluoroscopy.

[0087] Figure 66 is a diagram describing how the geometry space is first decomposed by extracting volumes of interest (VOIs) as volumes in which part templates can exist.

[0088] Figure 67 is a diagram describing how the volume of interest (VOI) is determined by detection.

[0089] Figure 68 is a diagram representing an alternative to statistical shape deformation, in which features are identified in the image directly and the so-called And-Or tree is used for identification and shape deformation.

[0090] Figure 69 is a series of computer-generated illustrations that decompose the femoral anatomy into primitive shapes.

[0091] Figure 70 is a bone model showing the locations of the ligaments that were extracted from the imaging data.

[0092] Figure 71 is a diagram representing contact maps of the tibia and femur for a deep knee bend.

[0093] Figure 72 is a diagram describing how static and fluoroscopic data are used to couple kinematics and morphology.

[0094] Figure 73 is a map of two distal femurs showing relative cartilage thickness as part of dynamic image capture.

[0095] Figure 74 is a flowchart for estimating cartilage thickness from dynamic data.

[0096] Figure 75 is a probability map of ligament locations for a distal femur and a proximal tibia.

[0097] Figure 76 is a pair of distal femur models mapping the amount of predicted cartilage loss.

[0098] Figure 77 is a process flowchart for creating and using kinematic training networks to identify kinematic patterns.

[0099] Figure 78 is a bone model of the knee joint showing a collapse of the medial side.

[00100] Figure 79 is a bone model of the knee joint showing an estimate of normal joint alignment and change in ligament length.

[00101] Figure 80 is an exemplary process diagram for automatic placement of the femoral stem using distal fixation.

[00102] Figure 81 is an exemplary process diagram for automatic placement of the femoral stem using a press fit and three contacts.

DETAILED DESCRIPTION

[00103] The exemplary embodiments of the present disclosure are described and illustrated below to cover exemplary surgical navigation methods and corresponding devices and systems. Of course, it will be evident to those skilled in the art that the embodiments discussed below are exemplary in nature and can be reconfigured without departing from the scope and spirit of the present invention. However, for the sake of clarity and precision, the exemplary embodiments discussed below may include optional steps, methods, and features that one of ordinary skill should recognize as not being a requirement to fall within the scope of the present invention.

[00104] An exemplary system, as described here, comprises a hybrid system that combines intraoperative fluoroscopy and/or X-rays and tracked instrumentation for real-time navigation. See Figure 1. The system can use one of several variations described in detail below. For each of the configurations below, intraoperative fluoroscopy can be replaced by digital planar radiography.
[0092] [0092] Figure 71 is a diagram that represents contact maps of the tibia and femur for a deep knee curve. [0093] [0093] Figure 72 is a diagram that describes how static and fluoroscopic data are used to couple kinematics and morphology. [0094] [0094] Figure 73 is a map of two distal femurs showing relative thickness of cartilage as part of dynamic image capture. [0095] [0095] Figure 74 is a flowchart to estimate cartilage thickness from dynamic data. [0096] [0096] Figure 75 is a probability map of the locations of the ligaments for a distal femur and a proximal tibia. [0097] [0097] Figure 76 is a pair of distal femur models that map the amount of predicted cartilage loss. [0098] [0098] Figure 77 is a process flow chart for creating and using kinematic training networks to identify kinematic patterns. [0099] [0099] Figure 78 is a bone model of the knee joint showing a collapse of the medial side. [00100] [00100] Figure 79 is a bone model of the knee joint showing an estimate of normal joint alignment and change in ligament length. [00101] [00101] Figure 80 is an exemplary process diagram for automatic placement of the femoral stem using distal fixation. [00102] [00102] Figure 81 is an exemplary process diagram for automatic placement of the femoral stem using a pressure fitting and three contacts. DETAILED DESCRIPTION [00103] [00103] The exemplary modalities of the present disclosure are described and illustrated below to cover exemplary surgical navigation methods and corresponding devices and systems. Obviously, it will be evident to those skilled in the art that the modalities discussed below are exemplary in nature and can be reconfigured without departing from the scope and spirit of the present invention. However, for the sake of clarity and precision, the exemplary modalities discussed below may include optional steps, methods and resources that a common specialist should recognize as not being a requirement to fall within the scope of the present invention. [00104] [00104] An exemplary system, as described here, comprises a hybrid system that combines intraoperative fluoroscopy and / or X-rays and tracked instrumentation for real-time navigation. See Figure 1. The system can use one of several variations described in detail below. For each of the configurations below, intraoperative fluoroscopy can be replaced by digital planar radiography. [00105] [00105] Figure 2 describes an exemplary workflow of an exemplary navigation configuration using a real-time tracking system and 2D imaging. The configuration of the outlined navigation system may require preoperative imaging to create a 3D surface of the pelvis and / or femur from one or more radiographic images, which is performed in the preoperative reconstruction module. Associated with 3D models, there are anatomical reference points, defining the anatomical dimensioning and the reference coordinate system (s). The 3D anatomical models can then be inserted into a preoperative surgical planning module, where the acetabular and / or femoral replacement components are virtually positioned. Intraoperatively, a first set of position sensor / reference IMU (sensor, image target and reference piece), including radiopaque resources distributed in a known orientation, is attached to a pelvis or femur of the patient being navigated; a second position sensor / IMU is connected to the tool being tracked. A single X-ray or 2D fluoroscopic image can be acquired and adjusted to correct any image distortion. 
[00106] Figure 3 depicts an exemplary variation of the previous configuration in which the sensors can be replaced by static X-ray or fluoroscopic imaging containing the anatomy and the implant. The position and orientation of the implant are registered to the image by a further 3D-to-2D registration process, where initialization can be done by aligning the 3D implant in a standard or planned position and orientation relative to the already-registered 3D anatomical model(s). Presumably, this initialization is close to the final position and therefore comprises a sufficient initial estimate. After an implant (or trial) component is registered, the orientation and position of the component can be calculated in the 3D coordinate system and reported to the operator on the screen. Adjustments can be made to the virtual 3D implant and projected onto the 2D image to provide feedback related to the expected image content if placed correctly. The software can also suggest alternative sizing and positioning to allow for a configuration that results in minimal offset and leg-length discrepancy. If both the femoral and acetabular implants have been placed and the 3D models have been registered to an image containing both components, the final orientation and positioning of the components can be calculated.

[00107] In yet another exemplary configuration of the general system, the image-based real-time intraoperative navigation system is used without preoperative planning or imaging. Figure 4 outlines the workflow for this exemplary configuration. Intraoperatively, a reference IMU sensor set with radiopaque features distributed in a known configuration is attached to the pelvis or femur of the patient being navigated.

[00108] In the following exemplary configuration, the real-time tracking system with sensors can be omitted, as can the preoperative imaging. In this configuration, the intraoperative image is used to obtain feedback related to the component's position and orientation. The registration of the bone, the image target, and the component in each of the captured images, and the reconstruction of the bone anatomy and/or landmarks from the images, are performed using the methods disclosed here for all configurations. Figure 5 describes the steps of this exemplary configuration. The position and orientation of the implant are registered to the image by the registration process, as described in detail below. Automatic initialization of the implant registration can be done by aligning the 3D implant in a standard position and orientation relative to the already-registered 3D landmarks. After an implant (or trial) component is registered to the acquired images and the patient's anatomy and/or landmarks, the orientation and position of the component can be calculated in the 3D coordinate system and reported to the operator on the screen. Adjustments can be made to the virtual 3D implant and projected onto the 2D image to provide feedback related to the expected image content if placed correctly. The navigation software module can also suggest alternative sizing and positioning to allow for a configuration that results in minimal offset and leg-length discrepancy. If the femoral and acetabular implants have been placed and the 3D models have been registered to an image containing the two components, the final orientation and positioning of the components can be calculated.
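The "projected onto the 2D image" feedback described above amounts, in essence, to a pinhole projection of the virtual implant into the radiograph's image plane. A minimal sketch follows, assuming an intrinsics matrix K built from the source-to-detector distance and detector pixel pitch, and a registered pose (R, t); this is an illustration under those assumptions, not the disclosure's specific implementation.

```python
import numpy as np

def project_points(X, K, R, t):
    """Pinhole projection of 3D points X (N,3) to pixel coordinates, given
    intrinsics K (3,3) and a registered pose (R, t) mapping model coordinates
    into the X-ray source (camera) frame."""
    Xc = X @ R.T + t                  # model frame -> camera frame
    uvw = Xc @ K.T                    # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# Illustrative intrinsics: focal length = source-to-detector distance (mm)
# divided by the detector pixel pitch (mm/px); (cx, cy) = principal point (px).
f, cx, cy = 1000.0 / 0.2, 512.0, 512.0
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
```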
I. Preoperative Imaging

[00109] An exemplary step in the exemplary configurations may include imaging the patient's joint and creating 3D models for virtual surgical planning. In addition to traditional imaging methodologies that use static imaging modalities, such as X-rays, CT, and/or MRI, to create anatomical models of the patient, this exemplary disclosure can incorporate additional techniques to capture the patient's bone as well as joint movement. In an exemplary embodiment, one or more X-ray images can be used to create a patient-specific 3D anatomical model for landmarks and measurements. At the same time, one or more tracking sensors can be attached to the patient and used in conjunction with the captured images to obtain joint movement data. This is described in more detail below. In another exemplary embodiment, if no sensor is available, X-rays or fluoroscopy can be used to image the patient's anatomy during various activities. The images recorded during the activities can be used to build 3D patient models coupled with kinematic data, which can be used for landmark marking and dynamic and/or static surgical planning. In another exemplary embodiment, ultrasound can be used to create the patient's bone model for landmark marking and measurements.

A. X-ray reconstruction of a joint

[00110] The 3D reconstruction, or non-rigid registration (shown in Figure 6), of anatomical models from multiple X-ray images plays an important role in understanding the patient's joint. However, a central problem in existing methods for 3D reconstruction from multi-view X-ray images lies in the following constraint: the X-ray images are obtained with different types of markers or fiducials as calibration targets, in order to improve the calibration accuracy in estimating the relative position and orientation of the image pairs. The main limitation of this calibration approach is that it is only capable of handling stereo radiographic images that include specific calibration targets. To address the above issues, a practical method is disclosed that does not require a calibration target, estimating epipolar lines based on feature matches between X-ray images of the same object or objects in different views.

[00111] The epipolar geometry between two images is the intrinsic projective geometry, most often determined by finding corresponding pixels in one image given a set of pixels in the other image. It can be determined by computing the fundamental matrix that describes the projective transformation between corresponding pixels in pairs of images. To estimate epipolar lines between pairs of images, feature correspondences are employed, which involves finding the projections of the same scene points in both images acquired in different views. However, matching the corresponding pixels or features in biplane X-ray images is an especially challenging problem, because the corresponding information can appear in different regions and shapes in each image. To this end, hybrid feature correspondences can be established across multi-view X-ray images.

[00112] An exemplary feature correspondence mode can be composed of: (1) feature detection, (2) feature description, and (3) feature matching.
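For concreteness, a minimal sketch of such a detection-description-matching pipeline is given below, using OpenCV's ORB detector and the grid-based motion statistics (GMS) match filter discussed in the following subsection (matchGMS requires the opencv-contrib build). The vector field consensus (VFC) refinement used by the hybrid method is not available in OpenCV and is omitted here, so this is only a partial, assumed pipeline.

```python
import cv2

def gms_matches(img1, img2, n_features=5000):
    """Detect ORB features in two (grayscale) radiographs and filter the
    nearest-neighbour matches with Grid-based Motion Statistics (GMS)."""
    orb = cv2.ORB_create(nfeatures=n_features, fastThreshold=0)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    raw = cv2.BFMatcher(cv2.NORM_HAMMING).match(des1, des2)
    # matchGMS expects image sizes as (width, height)
    good = cv2.xfeatures2d.matchGMS(img1.shape[1::-1], img2.shape[1::-1],
                                    kp1, kp2, raw,
                                    withRotation=True, withScale=False)
    return kp1, kp2, good
```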
A set of discriminative features in X-ray images

[00113] In biplane X-ray images, feature detection can be used as the primary process in determining feature matches, to extract salient features represented in the image, such as points, edges, lines, fragments, or any mixture thereof. The textures in the image, which are directly related to the locally salient information of the features, are critical to performing feature matching efficiently. However, feature correspondences on minimally textured bone structures in X-ray images can suffer degraded repeatability and stability in non-textured regions due to the lack of locally salient information.

[00114] As shown in Figure 9, the hybrid feature matching method can provide the ability to improve the accuracy of epipolar line estimation in the image pairs. For this reason, the exemplary feature matching method can employ an outlier removal method to achieve a better estimate of the epipolar lines. More specifically, the hybrid feature matching method reduces the population of outliers compared to the results of the grid-based motion statistics approach alone, as shown in Figure 8.

[00115] The true matches, or inliers, obtained from the hybrid matching described here can be used to calculate the fundamental matrix, which can be estimated using a random sample consensus (RANSAC) scheme in which iterative random selections of 8 matches are made. In each selection of the RANSAC scheme, the fundamental matrix is estimated, and its accuracy is evaluated based on the cardinality of the supporting subset of the candidate matches. Once the best solution for the fundamental matrix has been found, the epipolar lines (given knowledge of the internal camera parameters) can be determined using the basic property of the fundamental matrix: if any pair of points x and x′ in the two images correspond, then x′ lies on the epipolar line l′ = Fx corresponding to the point x, where F denotes the fundamental matrix. These epipolar lines can be used to reconstruct 3D models of the bone structure using the geometric relationship between the world points and their projections on the image planes, as described below.

[00116] An exemplary alternative method of calculating feature points and correspondences between X-ray views can use a priori information related to the anatomy being imaged and the expected properties of the anatomy's image. This alternative exemplary method uses statistical shape models having point correspondence across all anatomical samples incorporated into the model. For each shape model having a corresponding CT image, digitally reconstructed radiographs (DRRs) can be simulated in a plurality of known views. Each DRR is a simulated X-ray image of the patient's anatomy with known camera parameters. For each DRR, the position of the patient's anatomy relative to the image plane is also known, with specific correspondence to a statistical shape model. For each view, feature descriptions for each vertex in the anatomical model can be calculated by determining the location of the vertex in the DRR image and calculating the desired feature information at the image coordinate of the projection. The image coordinate of the projection is determined by drawing a line from the camera origin, through the shape vertex, to the image plane.
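A hedged sketch of the RANSAC fundamental matrix estimation and epipolar line computation just described, using OpenCV's built-in 8-point RANSAC routine on the matches produced above; the parameter values are assumptions.

```python
import cv2
import numpy as np

def fundamental_and_epilines(kp1, kp2, matches):
    """Estimate F with RANSAC over 8-point samples, then compute the epipolar
    lines l' = F x in image 2 for the inlier points of image 1."""
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                     ransacReprojThreshold=1.0,
                                     confidence=0.999)
    inliers1 = pts1[mask.ravel() == 1]
    lines2 = cv2.computeCorrespondEpilines(inliers1.reshape(-1, 1, 2), 1, F)
    return F, lines2.reshape(-1, 3)     # rows (a, b, c): ax + by + c = 0
```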
[00117] After image calibration, the reconstruction process estimates the 3D pose of the patient's bone in the different image views. This can be done by automatically selecting, using the same or similar a priori data as previously described, a predefined set of 2D landmarks representing projections of 3D anatomical landmarks in the image data set. Several projection points of anatomical landmarks are identified in at least two images of the image data set. The corresponding points in the two images can then be used to calculate the 3D landmarks in three dimensions using the fundamental matrix previously calculated between the two images. A list of bone models from a statistical bone atlas can then be aligned to the calculated 3D landmarks, thus registering them in the patient space. Subsequently, a template bone model can be selected to begin the reconstruction process. Given the patient's bone poses extracted from the different images, 3D graphical simulations of the radiological scenes used to capture the image data set can be created. The X-ray source can be represented by a perspective camera, simulating the divergence of the radiological beam, and can be placed at the focal-length distance from the projection plane. Within the camera's field of view, bone models from the atlas can be placed separately in the 3D bone poses extracted from the images, and bone projection images can be synthesized. The synthesized bone contours can then be compared to the radiographic images. The atlas bone model that produces synthesized bone contours with the smallest distances to the patient's radiological bone contours can be selected as the initial reconstruction template.

[00118] The selected bone template can be deformed to better represent the patient's anatomy. In the simulated radiological scenes, the radiological images can be placed on the projection planes, and rays can be generated between the location of the X-ray source and the radiological bone contour points. Template bone points can then be selected for each image ray based on a distance threshold d from the template points to the ray and a normal angle threshold (90 − α) relative to the ray. The 3D target points can be calculated by moving the selected points in a direction normal to the template's surface. The distance moved can be the distance between the ray and the template vertex closest to the ray. The template can then be transformed so that the distances between the selected template points and their corresponding 3D target points are minimized. After that, the template can be deformed by optimizing the principal component values of the bone atlas to minimize the distance between the selected template points and their corresponding 3D target points. The optimization can be done using any direct or heuristic search algorithm. This process can be repeated for a predetermined number of iterations or until there is no further significant deformation. The values of the distance d and the angle α can start with larger values for coarse deformation and then decrease linearly for fine-tuning with each iteration.
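A minimal numpy sketch of one correspondence pass of this deformation loop follows, under the assumptions that the X-ray source sits at the origin and that unit ray directions through the contour points are available; it illustrates the thresholding rule only, not the disclosure's implementation.

```python
import numpy as np

def correspondence_pass(rays_d, verts, normals, d_max, alpha_deg):
    """One pass of the template deformation loop in [00118]. rays_d: (M,3) unit
    ray directions from the X-ray source (assumed at the origin) through the
    contour points; verts/normals: (N,3) template vertices and unit normals."""
    sin_a = np.sin(np.radians(alpha_deg))
    pairs = []
    for r in rays_d:
        t = verts @ r                                          # ray parameter of closest point
        dist = np.linalg.norm(verts - np.outer(t, r), axis=1)  # vertex-to-ray distance
        ok = (dist < d_max) & (np.abs(normals @ r) < sin_a)    # within (90 - alpha) of the ray
        if ok.any():
            i = np.where(ok)[0][np.argmin(dist[ok])]
            # move the vertex along its own normal by the ray-vertex distance
            pairs.append((i, verts[i] + normals[i] * dist[i]))
    return pairs   # (template vertex index, 3D target point) pairs
```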
[00119] Alternatively, a machine learning structure can be created using the DRR data described earlier. In this context, the expected principal components of the reconstructed bone can be predicted from the image data and the initial pose. Here, a properly structured neural network can be trained using the DRR images and poses as input, and the principal components of the corresponding anatomical model as output. By generating a plurality of training sets and using these training sets to train a sufficiently deep neural network, the trained network can be used to predict the shape of the bone models initialized in newly presented calibrated X-ray images (see Figure 12).
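As one hedged illustration of what "a properly structured neural network" might look like, the following PyTorch sketch regresses atlas principal-component weights from a DRR and its initial pose; the architecture, layer sizes, and training details are assumptions, not part of the disclosure.

```python
import torch
import torch.nn as nn

class ShapeFromDRR(nn.Module):
    """Regresses the first n_pc principal-component weights of a statistical
    bone atlas from a single-channel DRR/X-ray image plus a 6-DoF initial pose.
    Architecture and sizes are illustrative assumptions."""
    def __init__(self, n_pc=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(64 * 16 + 6, 256), nn.ReLU(), nn.Linear(256, n_pc))

    def forward(self, image, pose):          # image: (B,1,H,W), pose: (B,6)
        return self.head(torch.cat([self.features(image), pose], dim=1))

# Training would minimize the MSE between predicted and atlas PC weights
# over the simulated (DRR, pose, shape) triples described above.
```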
B. Dynamic imaging with static X-ray and motion sensors

[00120] In this exemplary embodiment, the patient can wear one or more motion sensing units, such as IMUs, comprising one or more accelerometers, gyroscopes, and/or magnetometers, which output the rotation and/or position of the sensor. The sensor can transmit this data wirelessly to a processing device (phone, tablet, PC, or similar). X-rays can then be captured, where each X-ray image contains at least one IMU sensor and a portion of the patient's anatomy. These sensors can be attached externally, using an enclosure or a flexible band, or by any other means of attachment including, without limitation, adhesives. During imaging, the sensors and bones are captured and visible in multiple images. Calibration of the image sequence can then be performed by locating points on the IMU sensor in the captured image that correspond to known points in the IMU board design. The determination of the corresponding points and regions can be carried out automatically. These points in the image can correspond to components on the circuit board, such as resistors, capacitors, chips, trace routing, or any other feature that can be distinctly identified in one or more X-ray images and on the circuit board, as shown in Figure 13. Using the sensor for calibration, the bones can be reconstructed using the X-ray reconstruction methods outlined here or any other method that may be familiar to those skilled in object reconstruction and non-rigid registration. The reconstructed bone surfaces, together with the sensors registered to the images, can be used to initiate a motion capture session consisting of at least one bone and sensor, which have been registered in 3D space through X-ray bone reconstruction and registration of the sensor to the same image(s) used for reconstruction, providing information relating the sensor(s) to the bone(s). Using this relative information, the sensor data can now be directly related to the bone data. In this way, static X-ray images can be used to initialize a sensor-based motion capture system and used to capture dynamic 3D joint information. This exemplary process is illustrated in Figure 14.

[00121] The general structure of the reconstruction can comprise one or more of four parts, as shown in Figure 15: (A) image processing, which extracts features from the fluoroscopic images; (B) initialization, which estimates the initial pose of the 3D model using a hybrid classifier integrating k-nearest neighbors (KNN) and the support vector machine (SVM) (other machine learning techniques can be used to train and classify images); (C) optimization, which determines the optimal pose and shape of the 3D model by maximizing the similarity measure between the 2D X-ray fluoroscopy and the reconstructed 3D surface mesh model (the similarity measure is designed as a novel energy function including an edge score, a region score, a homogeneity score, and a multiple-body registration score); and (D) 3D shape analysis, which represents the training data set of 3D surface mesh models with a nonlinear statistical model called kernel principal component analysis (KPCA).

[00122] The creation of anatomical information from dynamic fluoroscopic image data begins with the acquisition of the fluoroscopic images. As part of this image acquisition, the subject/patient can be imaged in any number of positions, which may include a deep knee bend and opposite end points. After image acquisition, an image processing substep can be performed.

[00123] Using a calibration target, it is possible to estimate the distortion and remove it from subsequent images as part of the image processing substep. An exemplary step in this procedure may include estimating the geometric distortion of any 2D image. By taking an X-ray of a known rectangular grid of metal beads, a 2D spatial transformation can be estimated for each small square subimage bounded by four beads. Using standard techniques in geometric distortion removal, a local bilinear model can be used to model the spatial mapping, as well as the gray-level interpolation. Once the 2D distortion has been removed, the effective source-to-image plane distance (focal length) can be calculated using a two-plane calibration grid with a known offset between the planes.

[00124] Figure 16 illustrates a fluoroscopic image of a geometric calibration grid before and after removing the geometric distortion. As part of this substep, it is possible to calculate the bilinear transformation for each set of four grid points that transforms the bead image positions in the left image into regularly spaced grid locations on the right. Clearly, the calibration procedure removes the pincushion distortion so that the grid points lie along straight lines.

[00125] After image processing, an initialization substep can be performed to determine the initial pose of the mean model. Initialization can be based on a hybrid classifier that combines k-nearest neighbors and the support vector machine, as shown in Figure 61.

[00126] As depicted in Figure 60, two primary reconstruction methods were developed to construct the patient's 3D anatomy from fluoroscopy images. A first method, Method 1, comprises sequential shape and pose estimation, while a second method, Method 2, comprises reconstruction using an And-Or Tree (AoT). A more detailed discussion of each of these models follows.
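A hedged numpy sketch of the per-square bilinear model described in paragraphs [00123] and [00124]: the eight bilinear coefficients are fitted from one square's four beads and then applied to unwarp points (the gray-level interpolation of pixel values is omitted).

```python
import numpy as np

def fit_bilinear(src, dst):
    """Fit x' = a0 + a1*x + a2*y + a3*x*y (and likewise y') from the four
    observed bead positions of one grid square (src, 4x2) to their ideal,
    regularly spaced locations (dst, 4x2). Returns a 2x4 coefficient matrix."""
    x, y = src[:, 0], src[:, 1]
    A = np.stack([np.ones(4), x, y, x * y], axis=1)   # one row per bead
    return np.linalg.solve(A, dst).T

def apply_bilinear(C, pts):
    """Map (N,2) points through the fitted per-square bilinear transform."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.stack([np.ones(len(pts)), x, y, x * y], axis=1)
    return A @ C.T
```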
[00127] The 3D reconstruction by sequential shape and pose estimation can be based on a nonlinear statistical model, namely kernel principal component analysis (KPCA). By projecting the training data into the high-dimensional kernel space, the shape of the 3D model can be represented by a vector of shape parameters, as shown in Figure 62. As part of this method, an optimization process can be performed in which the optimization determines the shape and pose parameters of the 3D model from a sequence of monoplane fluoroscopic X-ray images, as shown in Figure 63. The optimization can be based on a novel energy function, which combines edge, region, homogeneity, and multibody registration scores to measure the similarity between the 3D model and the 2D X-ray image, as shown in Table 1.

[00128] Subsequently, the 3D model can be reconstructed by a pre-image approximation, because the map between the input-space and feature-space points is not necessarily known. It is preferable to reconstruct the pre-image of the corresponding test point based on the distance constraint in the input space. This can be achieved by establishing the relationship between the input-space distance and the feature-space distances, as shown in Figure 64.

[00129] Alternatively, as shown in Figure 65, the reconstruction can be performed with the AoT technique. Initially, the geometry space is decomposed by extracting volumes of interest (VOIs) as volumes in which part templates can exist. Each VOI can be further divided into a set of overlapping subvolumes, which can be used as bounding volumes for the placement of part templates. Examples of subvolumes are shown in the node on the left side of Figure 65. The And-Or tree can be generated recursively by partitioning volumes and representing the partitions by pairs of And-nodes and Or-nodes. The Or-node can connect to all the And-nodes that divide the volume represented by that Or-node into two subvolumes. The Or-node can also connect to two sets of leaf nodes, where on each node a surface is placed, either inscribing the volume or on the surface perpendicular to the depth direction. Each And-node can connect two Or-nodes, each representing one of the two smaller subvolumes occupying the current subvolume. This tree starts from a root Or-node representing the volume of interest (VOI) and continues to grow until the subvolumes are divided down to a size limit. Using the surfaces as bounding boxes, the appearance of the part templates can be further defined. The possible appearances of each part template can also be represented by an And-Or tree, where "And" represents composition and "Or" represents deformation. The "And"-node layers

[00130] As shown in Figure 66, the volume of interest (VOI) is determined by detection. The shape of a generic model can be learned from different known poses by optimizing the information gain. Templates can then be projected onto the 2D image planes as active contours, which deform in the image planes. Leaves of the appearance And-Or tree can be projected onto the 2D image planes as active contours. At the part level, templates can undergo in-plane translation and rotation, which is called 3D deformation. The projected active curves can also deform in 2D. Both the 2D and 3D deformations can be guided to maximize the information gain. By projecting the deformed active curves back onto the object plane, the 3D model can be reconstructed.
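To make the Or-node/And-node recursion of paragraph [00129] concrete, here is a minimal Python sketch that builds such a volume-partition tree; the representation is an assumption for illustration, and the leaf surface templates and the appearance tree are omitted.

```python
from dataclasses import dataclass, field

@dataclass
class OrNode:
    bounds: tuple   # (xmin, xmax, ymin, ymax, zmin, zmax)
    and_nodes: list = field(default_factory=list)  # each entry: a pair of child OrNodes

def build_aot(bounds, min_size):
    """Grow the volume-partition And-Or tree of [00129]: an Or-node holds a
    (sub)volume; each And-node splits it at the midpoint of one axis into two
    child Or-nodes; recursion stops at the size limit."""
    node = OrNode(bounds)
    for axis in range(3):
        lo, hi = bounds[2 * axis], bounds[2 * axis + 1]
        if hi - lo < 2 * min_size:
            continue                       # this axis is already at the size limit
        mid = 0.5 * (lo + hi)
        a, b = list(bounds), list(bounds)
        a[2 * axis + 1] = mid
        b[2 * axis] = mid
        node.and_nodes.append((build_aot(tuple(a), min_size),
                               build_aot(tuple(b), min_size)))
    return node

root = build_aot((0, 64, 0, 64, 0, 64), min_size=16)   # toy volume of interest
```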
"Learning 3D object templates by quantifying geometry and appearance spaces," IEEE Transactions on Pattern Analysis and Machine Intelligence 37.6 (2015): 1190-1205, the disclosure of which is incorporated herein by reference). In the foregoing publication, the shape parameters for bone anatomies are dictated by the structure of the AoT and the identification of these structures in the fluoroscopy frames.

[00132] It is worth mentioning that, for the knee, at least the knee portion of the joint (distal femur and proximal tibia) needs to be created. The same approach, however, can be applied to any joint.

D. Image processing

[00133] Since fluoroscopy is prone to image distortion, it may be desirable to correct this distortion before analyzing the image data. Using a calibration target, this distortion can be estimated and removed from subsequent images. A step in the calibration procedure can include estimating the geometric distortion of any 2D image. By taking an image of a known rectangular grid of metal beads, a 2D spatial transform can be estimated for each small square subimage that is bounded by four beads. Using standard techniques for removing geometric distortion, a local bilinear model can be used to model the spatial mapping, as well as the gray-level interpolation. After the 2D distortion has been removed, the effective source-to-image plane distance (focal length) can be computed using a two-plane calibration grid with a known offset between the planes. Figure 16 illustrates the fluoroscopic image of a geometric calibration grid before and after removal of the geometric distortion. A bilinear transformation can be calculated for each set of four grid points that maps the bead image positions in the left image to regularly spaced grid locations in the right image. This correction can be applied to each fluoroscopic image acquired during the procedure. Distortion correction may not be necessary for planar X-ray images.

Initialization

[00134] Initialization can be performed to determine the initial pose of the average model. Initialization can be carried out on the basis of a hybrid classifier that combines k-nearest neighbors and a support vector machine. Other options may include manually initializing the models, or using other machine learning structures, such as a CNN or similar deep learning architectures, to train and classify poses from the images. The output of the initialization step may comprise a template model and the appropriate pose of the model relative to the image plane in at least one frame of the fluoroscopic images.

Optimization

[00135] The optimization can include determining the shape and pose parameters of the 3D model from a sequence of monoplane fluoroscopic X-ray images, as shown in Figure 17. The optimization can be based on a novel energy function, which combines the edge, region, homogeneity and multiple-body registration scores to measure the similarity between the 3D model and the 2D X-ray image, as shown in Table 1. The hybrid energy function requires neither time-consuming DRR generation nor error-prone 2D segmentation.

Pre-Image

[00136] The 3D model can then be reconstructed by a pre-image approximation, because the map between the input-space and feature-space points is not necessarily known. The reconstruction of the pre-image of the corresponding test point can be based on a distance constraint in the input space.
This can be achieved by establishing the relationship between the input-space distances and the feature-space distances, as shown in Figure 18.

II. Surgical Planning

A. Static surgical planning

[00137] In any of the configurations of the exemplary systems disclosed herein, the relevant surgical reference points can be calculated manually and/or automatically (see Figure 19), where these calculated surgical reference points can be used to establish a coordinate system for measuring implant placement.

[00138] Before placing or guiding the placement of a surgical implant, it may be desirable for a virtual surgical plan to be created through a virtual templating or surgical planning process. It may be desirable for the virtual templating to be performed with 3D templates of implants identical to those to be used in surgery. However, if such an implant is not available, the templating can be done independently of the implant, using generic implant templates, which can be designed to mimic the shape and size of known surgical implants.

[00139] The virtual templating program can receive patient-specific 3D models from an automatic segmentation program, a non-rigid registration program, or both. In the context of a hip joint, the patient-specific 3D models can include the pelvis and the femur, which are both inputs to an automatic landmarking program. This automatic landmarking program calculates the anatomical reference points relevant to implant placement on the 3D models of the femur and pelvis, using regions of similar anatomy present in a statistical atlas and local geometric searches.

[00140] In the context of automatic placement of the femoral stem using distal fixation, as shown in Figure 80, the automatic landmarking may include the definition of axes on the femur and on the implant. With respect to the femur, the anatomical femoral axis (AFA) can be calculated, followed by the proximal anatomical axis (PAA). The proximal neck angle (PNA) can then be calculated, which is defined as the angle between the AFA and the PAA. With respect to the femoral implant, the implant axis runs along the length of the implant stem and the implant neck axis along the length of the implant neck. Similar to the PNA of the femur, the implant angle is defined as the angle between the implant axis and the implant neck axis. The implant with the implant angle closest to the PNA can then be chosen. The implant fitting angle (IFA) can then be defined as the intersection of the proximal anatomical axis with a vector drawn from the center of the femoral head at the chosen implant angle.

[00141] When using automatic positioning of the femoral stem with distal fixation and the calculated anatomical reference points, as shown in Figure 80, an implant sizing step can be used to determine/estimate the appropriate implant sizes for the femoral components. The implant size can be chosen by comparing the width of the implant with the width of the intramedullary canal and selecting the implant with the width most similar to that of the intramedullary canal. After that, the program can proceed to an implant placement step.

[00142] In an exemplary implant placement step for a distal fixation femoral stem, based on the surgeon's preferred surgical technique and the previously calculated anatomical landmarks, the initial implant position can be determined/chosen for all relevant implant components.
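The axis constructions just described reduce to elementary vector geometry; a brief sketch follows before continuing with the placement step. It computes the proximal neck angle as the angle between the AFA and the PAA, and selects, from a hypothetical implant family, the stem whose neck angle is closest to it; the axis vectors and the angle table are illustrative assumptions.

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two axis vectors, in degrees."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

# Illustrative axis vectors (in the model coordinate system).
afa = np.array([0.0, 0.0, 1.0])      # anatomical femoral axis
paa = np.array([0.78, 0.0, -0.62])   # proximal anatomical axis
pna = angle_deg(afa, paa)            # proximal neck angle (~128.5 deg here)

# Hypothetical implant family: name -> neck angle (degrees).
implant_angles = {"stem_A": 125.0, "stem_B": 130.0, "stem_C": 135.0}
best = min(implant_angles, key=lambda k: abs(implant_angles[k] - pna))
```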
A resection plane can then be created to simulate the proximal femoral osteotomy, and the implant fit can be assessed. The fit assessment can be performed by analyzing cross sections of the implant and of the femoral intramedullary canal, aligned at varying levels along the implant axis. The implant can be aligned with the femur by aligning the implant axis with the anatomical femoral axis and then translating the implant so that the implant neck is in the general location of the proximal femoral neck. The implant can then be rotated around the anatomical femoral axis to achieve the desired anteversion.

[00143] As part of this exemplary implant placement step, an iterative scheme can be used that includes using an initial "educated guess" for the implant placement as part of a kinematic simulation to evaluate the placement of the "educated guess". In an exemplary form, the kinematic simulation can take the implant (based on the chosen implant placement) through a range of motion using estimated or measured joint kinematics. Consequently, the kinematic simulation can be used to determine impingement locations and to estimate the resulting post-implantation range of motion of the implant. In cases where the kinematic simulation results in unsatisfactory data (for example, unsatisfactory range of motion, unsatisfactory reproduction of the natural kinematics, etc.), another implant placement location can be used, followed by a kinematic analysis, to further refine the implant placement until a satisfactory result is achieved.

[00144] In the context of automatic placement of the femoral stem using a press fit and three contacts, as shown in Figure 81, the automatic landmarking may include the definition of axes on the femur and on the implant. With respect to the femur, the anatomical femoral axis (AFA) can be calculated, followed by the proximal anatomical axis (PAA). The proximal neck angle (PNA) can then be calculated, which is defined as the angle between the AFA and the PAA. With respect to the femoral implant, the implant axis runs along the length of the implant stem and the implant neck axis along the length of the implant neck. Similar to the PNA of the femur, the implant angle is defined as the angle between the implant axis and the implant neck axis. The implant having the implant angle closest to the PNA can then be chosen from among several implants. The implant fitting angle (IFA) can then be defined as the intersection of the proximal anatomical axis with a vector drawn from the center of the femoral head at the chosen implant angle.

[00145] When using automatic positioning of the femoral stem with a press fit, three contacts and the calculated anatomical reference points, as shown in Figure 81, an implant sizing step can determine/estimate the appropriate implant size for the pelvic and femoral components. The implant size can be chosen by aligning the implant with the femur, aligning the implant axis with the anatomical femoral axis. The implant can then be rotated to align its neck axis with the femoral neck axis. The implant can then be translated to an anatomically appropriate position within the proximal femur. After that, the system can proceed to an implant placement step.
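A minimal sketch of the width-based sizing rule used in the sizing steps above follows (select the implant whose width is most similar to that of the intramedullary canal); the size chart and the measured widths are hypothetical.

```python
import numpy as np

# Hypothetical size chart: stem width (mm) per catalog size.
stem_widths = {8: 9.5, 9: 10.5, 10: 11.5, 11: 12.5, 12: 13.5}

def choose_stem_size(canal_widths_mm):
    """Pick the catalog size whose width is most similar to the measured
    intramedullary canal width (here summarized as the mean over the
    measured levels along the implant axis)."""
    target = float(np.mean(canal_widths_mm))
    return min(stem_widths, key=lambda s: abs(stem_widths[s] - target))

# Canal widths measured at several levels along the implant axis.
print(choose_stem_size([12.1, 12.6, 13.0]))  # -> 11 with this chart
```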
[00146] In an exemplary implant placement step for a press-fit femoral stem, based on the surgeon's preferred surgical technique and the previously calculated anatomical reference points, the initial implant position can be determined/chosen for all relevant implant components. A resection plane can be created to simulate the proximal femoral osteotomy, and the fit of the implant can be assessed. The fit evaluation can be performed by analyzing a contour of the implant and of the femoral intramedullary canal. The contour can be created by intersecting the intramedullary canal with a plane normal to both the anatomical axis and the femoral neck axis, passing through the intersection point of the anatomical axis and the femoral neck axis, producing a contour. When the contours of the implant and of the intramedullary canal are generated, only implants with widths less than the width of the intramedullary canal at the same location are retained, resulting in many possible correct implant sizes. The group of possible sizes can be reduced using two strategies that reduce the mean squared distance error between the implant and the intramedullary canal. The first strategy minimizes the mean squared error (MSE), or another error metric, of the distance between both the medial and lateral sides of the implant and the intramedullary canal. The second strategy minimizes the MSE of the distance between the lateral side of the implant and the intramedullary canal.

[00147] As part of this exemplary implant placement step, an iterative scheme can be used that includes using an initial "educated guess" for the implant placement as part of a kinematic simulation to evaluate the placement of the "educated guess". In an exemplary form, the kinematic simulation can take the implant (based on the chosen implant placement) through a range of motion using estimated or measured joint kinematics. Consequently, the kinematic simulation can be used to determine impingement locations and to estimate the resulting post-implantation range of motion of the implant. In cases where the kinematic simulation results in unsatisfactory data (for example, unsatisfactory range of motion, unsatisfactory reproduction of the natural kinematics, etc.), another implant placement location can be used, followed by a kinematic analysis, to further refine the implant placement until a satisfactory result is achieved.

[00148] In an alternative form of a surgical planning program, the templating need not require a database of 3D implant CAD models. Instead, the program can calculate the anatomical diameters and depths of the acetabular cup. The program can use a set of generic cup implants (hemispheres) to template the placement of the cup relative to the surgical reference points (see Figure 21).

B. Dynamic surgical planning for the knee

[00149] The surgical planning program, although previously described in detail for the hip, can also be used for any other joint that is a candidate for arthroplasty - such as, without limitation, the knee, hip, ankle, elbow, shoulder or the like. For many joints, specifically the knee, it may be important not only to analyze the static geometry and landmarks during templating, but also the dynamic information coupled to the joint's soft tissues. The virtual templating program uses the sensor motion data and the 3D data captured during preoperative imaging to determine the ideal implant size and position.
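Returning briefly to the two fit-assessment strategies described above for the press-fit stem, the sketch below scores candidate implant contours against the canal contour by the mean squared nearest-point distance, either over the full contour or over the lateral side only; the contours and the medial/lateral split are synthetic assumptions.

```python
import numpy as np

def mse_contour_fit(implant_xy, canal_xy):
    """Mean squared distance from each implant contour point to its
    nearest canal contour point; both are (N, 2) arrays in the plane
    normal to the anatomical and femoral neck axes."""
    d = np.linalg.norm(implant_xy[:, None, :] - canal_xy[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) ** 2))

# Strategy 1 scores medial + lateral points; strategy 2 lateral only.
# The 'lateral' index mask below is an illustrative assumption.
rng = np.random.default_rng(1)
canal = rng.normal(size=(100, 2))                          # placeholder contour
candidates = {s: canal * (0.80 + 0.02 * s) for s in range(5)}  # shrunken copies
lateral = np.arange(50)                                    # hypothetical lateral half

best_both = min(candidates, key=lambda s: mse_contour_fit(candidates[s], canal))
best_lat = min(candidates,
               key=lambda s: mse_contour_fit(candidates[s][lateral], canal[lateral]))
```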
[00150] With reference to Figure 59, the bone and soft tissue reconstruction substep 22 may include the prediction of the normal bone and soft tissue anatomy using the dynamic images obtained in substep 21. As part of the design of standard implants, or of patient-specific implants and instruments, a static CT or MRI is required to extract the joint morphology. However, the morphology is usually altered by disease or deformity. In the case of knee osteoarthritis, cartilage can be lost and the osteophytes present alter the knee morphology. The use of static CT and MRI may not accurately describe the condition of the joint ligaments. For example, the collapse of the medial compartment of the knee, in addition to the presence of osteophyte growth, alters the dynamic behavior of the medial collateral ligament (MCL) and the lateral collateral ligament (LCL). Given these major changes in soft tissues and bones, the extraction of bone contours becomes difficult, inaccurate and, at times, impossible. In this situation, a statistical atlas of a specific population can be used to predict the original shape of the deformed bone, as well as to accurately predict the location of the ligaments, and then to dynamically extract the design parameters and curvature for that specific patient. Instead of relying on static images to generate patient-specific implants and instruments, the present exemplary embodiment uses static images in addition to kinematic data (dynamic data) to generate implants and instruments that are optimized to replicate the patient's anatomy and kinematics. Prior art patient-specific implants and instruments are, at best, optimized only to replicate the patient's anatomy, but ignore kinematics or fail to use kinematics as an element of the final shape of the orthopedic implant.

[00151] Turning to Figure 70, as part of the soft tissue reconstruction associated with the virtual bone model, the locations of the ligaments can be extracted from the image data. In particular, bone and ligament surface models can be reconstructed from MRI. The bone models can be added to a statistical atlas, and each vertex can be flagged as belonging to the attachment site or not, based on the distance of the model from the ligament surface. Performing this step for several subjects allows the creation of a probability map for each ligament attachment site (shown in the top row for the ACL and PCL femoral attachment locations in Figure 70). In the map, each vertex of the bone atlas has a probability of belonging to the attachment site.

[00152] Referring to Figure 71, as an additional part of the bone and soft tissue reconstruction substep 22 (see Figure 59), contact maps for the femur and tibia can be created during deep knee flexion. For a single subject, both the femur and the tibia can receive vertex correspondence in the respective bone atlases. The pose of the femur relative to the tibia can be updated at each flexion angle. At each pose, the vertices of the femur and tibia belonging to the contact regions can be determined based on proximity to the articulating bone. Performing this analysis on several subjects allows the creation of a probability map of the contact regions on each bone, at each flexion angle.

[00153] Figure 72 depicts the inputs and outputs associated with an exemplary method for determining the cartilage thickness within the contact regions during a deep knee bend (as recorded during the dynamic imaging substep 21 in Figure 59).
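A minimal sketch of the contact-region computation described in the preceding paragraphs follows: vertices within a distance tolerance of the articulating bone are flagged at each pose, and a per-vertex contact probability is accumulated over the poses. The tolerance and the synthetic vertex arrays are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def contact_vertices(femur_v, tibia_v, tol_mm=2.0):
    """Flag femur vertices lying within tol_mm of the tibia surface."""
    dist, _ = cKDTree(tibia_v).query(femur_v)
    return dist < tol_mm

def contact_probability(femur_poses, tibia_poses, tol_mm=2.0):
    """Fraction of flexion poses in which each femur vertex is in contact."""
    counts = np.zeros(femur_poses[0].shape[0])
    for fv, tv in zip(femur_poses, tibia_poses):
        counts += contact_vertices(fv, tv, tol_mm)
    return counts / len(femur_poses)

# Synthetic usage: 10 poses of 1000-vertex point clouds.
rng = np.random.default_rng(0)
fem = [rng.normal(size=(1000, 3)) for _ in range(10)]
tib = [rng.normal(loc=[0.0, 0.0, -1.0], size=(1000, 3)) for _ in range(10)]
prob_map = contact_probability(fem, tib)
```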
The method of Figure 72 can be used to map the contact regions, determine the patient-specific thickness and link this information to the normal statistical atlas created earlier. In this way, the kinematics is coupled with the morphology.

[00154] Figure 73 depicts a patient-specific cartilage map (obtained from the kinematic analysis, substep 21 in Figure 59) showing severe cartilage loss in the medial compartment. Creating a patient-specific implant with only this information would lead to poor implant functionality. Instead, the present embodiment can use statistical methods to estimate the morphology as it was before the deformity arose, thus allowing the extraction of the true patient-specific curvature (prior to the pathology). This estimated morphology leads to greater implant functionality.

[00155] Referring to Figure 74, a flowchart is depicted for estimating cartilage thickness from dynamic data in accordance with the present disclosure. Given various poses of the articulating surfaces - the knee, for example - the contact at each pose can be determined. Contact can be determined primarily using a small percentage of the closest points between the models. For each contact point, the distance between the point and the articulating model can be determined. The cartilage thickness can then be estimated as X% from model 1 and Y% from model 2, so that the sum of the two thickness values equals the total distance between the surfaces. Calculating the thickness at each contact vertex in each pose provides a set of "known" thicknesses that must be kept constant during the estimation procedure. This set can be considered convex set 1 in the projections onto convex sets (POCS) algorithm. Convex set 2 is the cartilage atlas, previously calculated from a priori data sets - these cartilage atlases may include normal anatomy, specific pathological cohorts (varus or valgus knees) or a combination thereof. Following the POCS algorithm, convex set 1 is projected onto convex set 2, and the result is projected back onto convex set 1 - this is repeated until the result converges. In the described algorithm, the projection onto the atlas updates all vertices belonging to the cartilage. If the result has not converged, the vertices belonging to the cartilage are reset to convex set 1 (the "known" thicknesses) and projected back onto the atlas until convergence is achieved. Upon convergence, the cartilage surface on each articulating bone model is exported. This routine allows an accurate cartilage estimate, using dynamic data to capture complete contact information.

[00156] Returning to Figure 59, after the reconstruction of the bone shape, the location of the ligaments can be predicted using the ligament shape atlas, the shape atlas having the ability to capture the soft tissue attachment sites across the population, together with the probability that each point is a ligament attachment site, as shown in Figure 74. The calculated ligament insertion points can then be used to calculate the ligament length envelope during kinematic activities, as shown in Figure 75.

[00157] The basis for the cartilage estimation can be a statistical model that contains a mean cartilage template and uses information from the segmented femur and tibia models to locally deform the mean cartilage template. The mean cartilage template can be the mean cartilage thickness calculated from a database of manually segmented cartilage models.
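The POCS iteration described above can be sketched as alternating projections; for brevity the cartilage atlas is modeled here as a linear subspace (mean plus orthonormal modes), although the atlases in this disclosure may be nonlinear. All inputs are assumed arrays.

```python
import numpy as np

def pocs_cartilage(thickness0, known_idx, known_vals, atlas_mean, atlas_modes,
                   n_iter=100, tol=1e-6):
    """Alternate projections: set 1 fixes the thicknesses measured at the
    contact vertices; set 2 is the span of the cartilage atlas (assumed
    orthonormal columns).  Iterates until the update is below tol."""
    t = thickness0.copy()
    for _ in range(n_iter):
        t[known_idx] = known_vals                    # project onto set 1
        coeffs = atlas_modes.T @ (t - atlas_mean)    # project onto set 2
        t_new = atlas_mean + atlas_modes @ coeffs
        if np.linalg.norm(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Synthetic usage with placeholder atlas and "known" contact thicknesses.
n, k = 200, 5
rng = np.random.default_rng(4)
modes, _ = np.linalg.qr(rng.normal(size=(n, k)))     # orthonormal atlas modes
mean_t = 2.0 + 0.1 * rng.random(n)                   # mean thickness (mm)
known = np.arange(0, n, 10)                          # contact vertices
vals = 1.5 + 0.2 * rng.random(known.size)            # "known" thicknesses (mm)
est = pocs_cartilage(mean_t.copy(), known, vals, mean_t, modes)
```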
Each thickness value in the template has an associated index corresponding to a point on the bone atlas, which is used to locate that value. When adding the mean template to a new bone model, each point on the bone can be warped outward along the normal direction by a distance corresponding to the mean thickness from the template at that location. The mean cartilage template can be adjusted only where the femoral and tibial cartilages overlap. In that case, the cartilage thickness can be reduced globally by a small factor and in the overlapping areas by a larger factor. This process iterates until there are no overlapping areas.

[00158] Using the estimated cartilage maps, together with the measured joint deformity, the location of cartilage loss can be determined, and the amount of cartilage loss can be estimated by projecting the patient's cartilage onto the normal cartilage model, as shown in Figure 76. By correcting for the amount of cartilage loss, the joint can be returned to its normal alignment. This change in joint alignment directly affects the lengths and laxity of the ligaments. For example, cartilage loss on the medial side will lead to laxity in the medial collateral ligament and increased tension in the lateral collateral ligament, as shown in Figure 77. With the restoration of normal joint alignment, the change in ligament length can be calculated (see Figure 78). Using the morphology and soft tissue information, the closest kinematic model in the normal kinematic database can be selected.

[00159] As an example, normal healthy kinematics can be determined through the use of deep neural networks, where the network can be trained on motions performed by healthy joints. The deep neural network can receive the pathological motion as input and determine the ideal healthy kinematics (see Figure 73).

[00160] Referring to Figure 79, a flowchart is provided describing an exemplary process for calculating patient-specific ligament stiffness as part of the normal kinematics prediction element in Figure 59. In an exemplary manner, discussed in relation to an ankle arthroplasty procedure, a number of aspects of the passive kinematic motion can be recorded, including, without limitation, the motion, the speed of the motion, the acceleration of the motion and the ligament lengths during the motion. These aspects can be used as inputs to a passive motion model, which can also receive inputs related to ligament stiffness and the mass of tissue moved during the motion. Using these inputs, the passive motion model can predict the force required to move the tissues. This predicted force can be compared with a measured force to analyze whether the ligament stiffness values should be adjusted so that the predicted force equals the measured force. In circumstances where the predicted and measured forces are not the same or very close, the ligament stiffness values and/or the mass inputs can be optimized and updated to allow subsequent predicted force calculations. This process can be repeated until the predicted and measured forces are within an acceptable tolerance of each other.

[00161] Using the information generated from the above dynamic and soft tissue analyses, virtual templates of the femoral, tibial and tibial insert components can be chosen from a family of implants to determine the best size and positioning parameters, so that the postoperative dynamic outcomes - motion, ligament lengths, tension and femorotibial contacts - can be optimized for the patient (see Figure 23).
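A hedged sketch of the stiffness-calibration loop of Figure 79 follows: a toy passive-motion model treats each ligament as a tension-only spring, and the per-ligament stiffness is adjusted until the predicted and measured forces agree. The spring model, rest lengths and data are illustrative assumptions, not the disclosure's passive motion model.

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_force(stiffness, lengths, rest_lengths):
    """Toy passive-motion model: tension-only springs; 'lengths' is
    (n_poses, n_ligaments), returning one total force per pose."""
    strain = np.maximum(lengths - rest_lengths, 0.0)
    return strain @ stiffness

def calibrate_stiffness(lengths, rest_lengths, measured_force, k0):
    """Adjust per-ligament stiffness until predicted force matches the
    measured force, mirroring the iterative loop described above."""
    resid = lambda k: predicted_force(k, lengths, rest_lengths) - measured_force
    return least_squares(resid, k0, bounds=(0.0, np.inf)).x

# Synthetic demonstration with a known ground-truth stiffness.
poses, ligs = 30, 2
rng = np.random.default_rng(3)
L = 40.0 + rng.random((poses, ligs))     # ligament lengths per pose (mm)
L0 = np.array([40.0, 40.2])              # rest lengths (mm)
true_k = np.array([8.0, 12.0])           # N/mm, ground truth for the demo
F = np.maximum(L - L0, 0.0) @ true_k     # "measured" forces
k_est = calibrate_stiffness(L, L0, F, k0=np.ones(ligs))
```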
This templating optimization can begin with standard sizing and positioning, as determined by the geometry of the patient's joint. This initial positioning can then be adjusted automatically to take into account desirable (or undesirable) corrections to the patient's pathology and the effect of the corrections on the ligament locations and anticipated contact areas. The planning software will present the predicted pre- and postoperative dynamic data and allow the user to inspect the results. If they are not satisfactory, the user can change the position and/or the size of the virtual implants, causing the software to re-analyze the predicted dynamic data, which are presented to the user again. This process can continue until the user is satisfied. An example of an unsatisfactory result would be if the chosen implant position or size caused a significant change, relative to the preoperative anatomy, in the length of the MCL or LCL, as predicted by the planning module. Such a large change may be indicative of excessively tight or loose ligaments in that flexion situation and may require a change in the surgical plan. The result of this step is a preoperative surgical plan optimized for the specific patient.

III. Anatomy Registration

[00162] In 3D-to-2D registration, the goal is to align a 3D surface to each frame of a monoplane fluoroscopic sequence or set of X-ray images (see Figure 24). The 3D model is the surface mesh model of the patient's anatomy generated from preoperative images. The pose of the 3D model for all frames in the sequence can be determined by optimizing an energy function. The energy function can consist of an edge score term, an intensity score term, a misalignment term and a collision detection term. The edge score term and the intensity score term measure how well the projections of the 3D model fit the fluoroscopic image with respect to the edges and the intensity, respectively. The misalignment term and the collision detection term penalize misalignment of, and collision between, neighboring bones in the same frame. Other factors can be introduced into the energy function to use a priori information, such as the relative pose of multiple anatomies (pelvis and femur, for example), known hardware, or any other factor relevant to optimizing the fit.

[00163] The registration process can be performed by software (for example, a program) that requests input from the user. For example, the software may require the user to identify landmarks in the image that correspond to landmarks on the 3D surface of the bone or implant.

[00164] An intraoperative procedure can begin with preparing the patient for intraoperative imaging. The imaging system can produce radiographic images, such as X-rays or fluoroscopy.

[00165] As part of a first exemplary process (configuration A, see Figure 28), the patient's pelvis and/or femur can be fitted with a fixation device (for example, bone pins), and a tracking device/sensor (for example, an IMU) can be attached to the fixation devices. As an example, the fixation device may comprise a base fixture and an extension connector that connects the tracking device to the base fixture. The base fixture can be attached to the patient's bone (e.g., the pelvis). The extension connector can be removed after registration to allow more room for the surgeon to operate.
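A hedged sketch of the edge-score term of the registration energy described in Section III follows: model points are projected through a pinhole model and penalized by their distance to the nearest image edge, optimizing over a six-parameter pose. The camera parameters, the small-angle rotation and the synthetic edge image are illustrative assumptions; the full energy of this disclosure adds intensity, misalignment and collision terms.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import distance_transform_edt

def project(pts, pose, focal=1000.0, c=(256.0, 256.0)):
    """Pinhole projection after a rigid transform; pose = (tx, ty, tz,
    rx, ry, rz).  A small-angle rotation is used for brevity."""
    t, w = pose[:3], pose[3:]
    W = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
    p = pts @ (np.eye(3) + W).T + t
    return focal * p[:, :2] / p[:, 2:3] + np.asarray(c)

def edge_energy(pose, pts, dist):
    """Mean distance from projected model points to the nearest edge
    pixel (lower is better); 'dist' is a precomputed distance map."""
    uv = np.clip(np.round(project(pts, pose)).astype(int), 0, 511)
    return dist[uv[:, 1], uv[:, 0]].mean()

# Illustrative use: refine an initial pose against a binary edge image.
edge_map = np.zeros((512, 512), dtype=bool)
edge_map[200:300, 250] = True                       # placeholder edge
dist = distance_transform_edt(~edge_map)            # distance to edges
pts = np.random.default_rng(2).normal([0, 0, 900], [20, 50, 5], (200, 3))
result = minimize(edge_energy, np.zeros(6), args=(pts, dist), method="Powell")
```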
[00166] It should be obvious to those skilled in the art of sensor fusion and data processing that the positions and/or orientations of the device can be determined from the sensor outputs using a Bayesian estimation algorithm. In an exemplary manner, a direct Kalman filter can be used to predict positions and/or orientations based on the sensor outputs. See US20170296115, which is incorporated herein by reference. In another exemplary form, an error-state Kalman filter can be used to predict the output error of the sensors based on the estimated orientations. It can be used to filter out erroneous or corrupted sensor data (e.g., vibration-induced drift) that could potentially produce a wrong result.

[00167] An additional alternative exemplary estimation technique may use an event-based PID estimation technique that decreases the dependency on a sensor when a corrupted signal is detected (for example, prolonged magnetic distortion).

[00168] The software can choose among the outputs, based on the conditions of the sensors, to produce the best result.

[00169] The tracking device can be wired or wireless for data communication with a computing device on which the software runs. As an example, the tracking device may incorporate a single radiopaque feature, or a combination of radiopaque features, for radiography.

[00170] The radiopaque features, which can be incorporated into the tracking sensors or provided as independent objects, can be arranged in any combination. As an additional example, the radiopaque features may number at least four and be arranged so that at least one of the features is not in the same plane as the other features.

[00171] As part of a second exemplary process (configuration B, see Figure 29), an accessory can be used to extend the tracking device/sensor (for example, an IMU) away from the fixation device. Due to the limited viewing area of the radiographic imaging system, the accessory may include additional radiopaque features to aid registration and to allow placement of the tracking device while ensuring that radiopaque features are available for registration. The accessory may have radiopaque features built into it to assist with the registration process.

[00172] As an additional example, as shown in Figure 30, the accessory can incorporate the radiopaque features that may be necessary for intraoperative registration. As a further example, in a circumstance where the accessory has radiopaque features, the accessory can be used alone for intraoperative imaging and registration. The tracking device can be attached to the accessory after imaging. The accessory can serve to reduce electromagnetic interference from the radiographic imaging system on the tracking device.

[00173] As part of any exemplary process, multiple radiographic images can be taken. For example, one image where the pelvis and the radiopaque features of the pelvic tracking device are in the viewing area of the radiographic imaging system, and a second image where the femur and the radiopaque features of the femoral tracking device are in the viewing area of the radiographic imaging system. These images can be used for registration.

[00174] After the tracking device has been set up, radiographic imaging can be performed using a radiographic imaging system.

[00175] The image data can be transferred to the surgical guidance software by sending the image data wired or wirelessly over a network, or by physical transfer via an external storage device.
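Returning to the orientation estimation of [00166], a minimal sketch of the direct-Kalman idea follows, reduced to a single tilt angle: the gyro rate drives the prediction and the accelerometer-derived angle serves as the measurement. The noise parameters are illustrative assumptions; a production filter would track the full 3D orientation (or, per the disclosure, an error state).

```python
import numpy as np

class TiltKalman:
    """Minimal direct Kalman filter for one tilt angle."""
    def __init__(self, q=1e-4, r=3e-2):
        self.x = 0.0          # estimated angle (rad)
        self.p = 1.0          # estimate variance
        self.q, self.r = q, r  # process and measurement noise (assumed)

    def step(self, gyro_rate, accel_angle, dt):
        # Predict by integrating the gyro rate.
        self.x += gyro_rate * dt
        self.p += self.q
        # Update with the accelerometer-derived angle measurement.
        k = self.p / (self.p + self.r)
        self.x += k * (accel_angle - self.x)
        self.p *= (1.0 - k)
        return self.x

# Illustrative use on a synthetic stream of gyro/accelerometer samples.
kf = TiltKalman()
rng = np.random.default_rng(5)
for _ in range(100):
    angle = kf.step(gyro_rate=0.01 + rng.normal(0, 0.002),
                    accel_angle=0.5 + rng.normal(0, 0.05), dt=0.01)
```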
The image data can be processed by image processing software, which can correct distortions in the images.

V. Transfer of intraoperative data

[00176] Intraoperative images can be transferred to an exemplary system disclosed herein, running the reconstruction and registration software, from an imaging device using different methods, including, but not limited to: wireless transfer (Bluetooth or Wi-Fi), transfer via a picture archiving and communication system (PACS), wired transfer, or removable transfer via a portable device, such as a secure USB storage device.

VI. Intraoperative registration

[00177] The orientations of a tracking device/sensor (for example, an IMU) used in accordance with the present disclosure can be retrieved/discerned from radiographic images using a registration target (i.e., an image target) associated with the tracking device, which may include radiopaque features. The configuration of the radiopaque features on the registration target can be known to the surgical guidance software, as described herein, which allows the software to calculate the 3D orientation of the registration target solely from the fiducial markers when the radiographic images are analyzed by the software.

[00178] The orientation of the tracking device/sensor can also be determined by the sensors on the tracking device itself. The orientation produced by the sensors can be in a different coordinate system than the orientation calculated from the radiographic image(s). The transformation between the two orientations, in potentially different coordinate systems, can be calculated so that the orientation determined by the tracking device can be transformed into the radiographic image coordinate system and space, and vice versa.

A. Image target for registration of the navigation system

[00179] An exemplary image target according to the present disclosure can be dimensioned so that it is visible in all expected image views without being excessively bulky (see Figure 34). This dimensioning may be the result of one or more analyses performed on a representative population, in addition to simulated X-ray and fluoroscopic images, for the body region of interest, such as, without limitation, any of the joints of a potential patient's body.

[00180] An exemplary image target according to the present disclosure can include one or more beads, which can be embedded within a radiolucent or radio-translucent material and can have a known shape and size. As an example, a known size can comprise a sphere with a diameter of 9.4 mm. The beads can be arranged in an asymmetric pattern along a non-planar surface, with the precise configuration of the beads being known, to facilitate the identification and registration of the image target in the acquired radiographic images. The image target can comprise a bead template, to hold and retain the beads, and can allow rotation (for example, between zero and 180 degrees) relative to a tracking device, thus allowing the beads to fit within predetermined image frames. The number of beads may vary, but preferably at least four beads are used, not all of which lie in the same plane once assembled in the bead template. A larger number of beads can provide greater redundancy, for example, 5-20 beads. Figure 35 shows an exemplary bead template that includes an asymmetric design pattern in accordance with the present disclosure, viewed from different orthogonal perspectives.
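Recovering the image-target pose from its beads is a standard perspective-n-point problem once at least four non-coplanar beads are detected, consistent with the bead layout just described. A sketch assuming OpenCV follows; the bead coordinates, camera matrix and detected centers are placeholder assumptions.

```python
import numpy as np
import cv2

# Known 3D bead positions in the image-target frame (mm): an asymmetric,
# non-coplanar layout as described above (placeholder coordinates).
beads_3d = np.array([[0, 0, 0], [30, 5, 0], [10, 40, 8],
                     [45, 25, 15], [20, 15, 25]], dtype=np.float64)

# Detected 2D bead centers in the radiograph (pixels; placeholders).
beads_2d = np.array([[512, 520], [630, 540], [560, 700],
                     [700, 640], [590, 610]], dtype=np.float64)

# Intrinsics from the imaging-system calibration (focal length in pixels,
# principal point); geometric distortion is assumed removed upstream.
K = np.array([[3000.0, 0.0, 512.0],
              [0.0, 3000.0, 512.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(beads_3d, beads_2d, K, distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)  # 3x3 orientation of the target in image space
```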
The asymmetric design helps to reduce the chance of beads overlapping between the different views of the radiographic images taken while the bead template is within the field of view.

[00181] The exemplary image target can be used with or without a tracking device/sensor (for example, an IMU). The exemplary image target can include at least one locking feature, which allows the image target to be locked to a tracking device/sensor using a reference piece, as seen in Figure 36. The reference piece can include at least two locking features, one intended for use with the image target and one with the tracking device/sensor. The assembly of the image target, reference piece and tracking device/sensor places the sensor in a known orientation and position relative to the image target. Therefore, when the 3D model of the image target is registered to the beads visible in the captured radiographic image(s), the position and orientation of the tracking device/sensor relative to the registered 3D image target can be determined from knowledge of the assembly design. Figure 36 shows examples of the image target, tracking device/sensor and reference assembly. The image target and the tracking device/sensor are locked to the reference piece in a known orientation and a known position.

[00182] Figures 38 and 39 show two different views, AP and Judet, with the image target, the reference piece and the tracking device/sensor in position in the respective radiographic images, and the result of using these radiographic images to build and register the corresponding 3D virtual models. The assembly (for example, image target, reference piece and tracking device/sensor) can be rigidly mounted on the patient's bone. This mounting can be performed percutaneously or using intra-incision fixation. The reference piece can be designed to facilitate this fixation. An exemplary alternative embodiment of a reference piece may have at least two holes designed to allow surgical pins to pass through it, with the pins being configured to lock to the reference piece using conventional methods, such as, without limitation, set screws or similar devices. As it may be desirable to guide the placement of certain femoral components during a reconstructive surgical procedure, the assembly can also be rigidly attached to the femur, as shown in Figure

[00183] The orientations of the tracking device/sensor (for example, an IMU) and of the patient's anatomy can both be transformed into the radiographic image space and registered together. This process may involve a first 3D-to-2D registration step to register the patient's anatomy to the imaging plane, followed by a second 3D-to-2D registration step to align the reference assembly (image target, reference piece and tracking device/sensor). After the registration is completed, the relative location and orientation of the tracking device and the anatomy become known. At that point, the tracking device can be used to track the patient's bone segment. This step is described in Figures 42 and 43.

[00184] In an exemplary process of performing the patient registration for a total hip arthroplasty procedure, a tracking device can be connected to the patient's pelvis via a fixation device, while another tracking device can be connected to the patient's femur via another fixation device.
Radiographic images can be obtained for both bone segments (that is, the femur and the pelvis for hip arthroplasty), together with the radiopaque features of each image target. As referenced in Figure 42, the assembly is mounted on the patient's bone and a radiographic image is taken (A). The position and orientation of the patient's bone are recovered through the registration process described previously (B). The position and orientation of the global frame, provided by the image target, are recovered. The offset between the position and orientation of the patient's bone and the position and orientation of the image target is calculated, and the patient's bone is tied to the image target (C). The tracking device attached to the image target reports its current orientation at the same moment that the image is captured. The offset between the tracking device and the image target is calculated so that the orientation of the tracking device is now in the global frame. Since the tracking device and the anatomy are linked through the image target in the global frame, the orientation of the patient's bone can be registered to the tracking device, and the tracking device can be used to track changes in the orientation of the patient's bone. In this way, the image data can be transferred to a surgical guidance computer to perform the registration. Once registration is complete, the image target can be removed from the device or fixation assembly.

[00185] With reference to Figure 43, the relationship between the tracking device and the radiopaque features of the image target is known to the system. The radiopaque features can be incorporated into, or attached to, a secondary device that attaches to the tracking device. The transformation from the fiducial marker locations to the orientation of the tracking device is defined as TiF, i = 1, 2, .... The transformation from the 3D locations of the radiopaque features to the 2D radiographic image is defined as TIF for the radiopaque features on the femur and TIP for the radiopaque features on the pelvis. The transformation to determine the orientation of tracking device 1, attached to the femur, based on the 2D radiographic image is given as T1F x TIF, and the orientation of tracking device 2, attached to the pelvis, based on the 2D radiographic image is given as T2F x TIP. The transformation of the 3D patient bone models to the 2D radiograph is defined as TIFemur for the femur and TIPelvis for the pelvis. With these transformations, the orientation of the patient's bones can be registered to the tracking devices attached to them. Femur: TIFemur → T1F x TIF; Pelvis: TIPelvis → T2F x TIP.

C. Patient Anatomical Mapper (PAM)

[00186] According to the present disclosure, an exemplary patient anatomical mapper (PAM) comprises patient-specific instrumentation that is manufactured to fit in a specific orientation and position on the patient's anatomy. The PAM geometry can be created from a virtual 3D model of the patient's bone, which is created from previously obtained imaging, such as preoperative images. The PAM can include one or more locking features designed to facilitate attaching a tracking device/sensor, or a reference piece holding a tracking device/sensor. Another PAM locking feature is patient-specific and is designed to mate in a unique position and orientation with the patient's anatomy (such as against a patient's bone).
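The registration chaining in [00185] amounts to composing rigid transforms. The sketch below, with hypothetical 4x4 homogeneous matrices standing in for the registrations described above, computes the fixed tracker-to-bone offset that lets the IMU stream update the bone pose after the image target is removed.

```python
import numpy as np

def inv(T):
    """Invert a rigid 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Hypothetical transforms recovered by the 3D-to-2D registrations:
#   T_image_bone   : patient bone    -> radiograph coordinate frame
#   T_image_target : image target    -> radiograph coordinate frame
#   T_target_imu   : tracking device -> image target (known by design)
T_image_bone = np.eye(4)
T_image_target = np.eye(4)
T_target_imu = np.eye(4)

# Fixed offset tying the tracking device to the bone; afterwards the IMU
# alone updates the bone pose via T_image_bone = T_image_imu @ T_imu_bone.
T_image_imu = T_image_target @ T_target_imu
T_imu_bone = inv(T_image_imu) @ T_image_bone
```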
When mating the patient-specific locking feature with the correct location on the patient's anatomy, the orientation and position/location of the attached tracking device/sensor are known relative to the anatomy.

[00187] The PAM can incorporate radiopaque features similar to those of the image target discussed herein, so that by placing the PAM on the patient's anatomy in the desired position and orientation, the position and orientation of the PAM relative to the image target can be known. This offset can be used to check the leg length after implant placement.

VII. Surgical Guidance

[00188] According to the present disclosure, the foregoing tracking devices/sensors (for example, IMUs) can be used as part of surgical guidance, such as in a total hip arthroplasty procedure. In an exemplary manner, surgeons can carry out the surgical steps typical of total hip arthroplasty, such as making an incision, performing the resection of the femoral head and exposing the acetabulum. The surgeon can connect one of the tracking devices to a pelvic fixation device (attached to the pelvis) and another tracking device to the surgical instrument to be guided, including, without limitation, a drill, a cup impactor, a rasp handle, a cutting guide or any other instrument. The tracking devices can be configured to continuously send data indicative of orientation and/or translation to the processing device (e.g., computer, specialized machine, tablet, etc.) running the surgical navigation software. The relative orientation between the tracking devices can be represented as inclination/declination and abduction/adduction angles, or any other desirable values, and be displayed on a display such as, without limitation, a computer monitor or surgical navigation display. The surgeon can use the tracking devices to know the orientations of one or more surgical instruments for exemplary steps such as acetabular reaming, impaction of the acetabular cup during trial placement, impaction of the acetabular cup during the actual positioning of the final orthopedic implant, and verification of the orientation of the acetabular cup relative to the patient's anatomy.

[00189] The surgeon can also use the tracking devices for placement of the femoral stem, which may include attaching one or more tracking devices to a femoral fixation device (attached to the femur), and another tracking device connected to the surgical instrument. The surgical navigation system can use the data from the tracking devices to determine and guide the surgical instrument orientations, where the surgical guidance can alert the surgeon in cases where the instrument orientation could cause a fracture of the femur. The surgeon can also use this surgical navigation system and the tracking devices to place the femoral implant. The surgical guidance software can estimate the combined clinical inclination/declination and abduction/adduction angles of the implant. A more detailed discussion follows.

[00190] As an example, a first IMU comprising a tracking device/sensor, optionally having been previously mounted on an acetabular registration tool, can be mounted on a surgical tool at a known location.
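The relative orientation displayed to the surgeon can be derived from the two tracker orientations. In the sketch below, the mapping from the relative quaternion to clinical angles is an illustrative assumption; the true mapping depends on the registered anatomical axes.

```python
import numpy as np

def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def instrument_angles(q_pelvis, q_tool):
    """Relative orientation of the instrument with respect to the pelvis
    tracker, reported as two illustrative angles (roll/pitch of the
    relative rotation); the clinical labeling is an assumption."""
    q_rel = quat_mul(quat_conj(q_pelvis), q_tool)
    w, x, y, z = q_rel / np.linalg.norm(q_rel)
    abduction = np.degrees(np.arctan2(2*(w*x + y*z), 1 - 2*(x*x + y*y)))
    anteversion = np.degrees(np.arcsin(np.clip(2*(w*y - z*x), -1.0, 1.0)))
    return abduction, anteversion

# Illustrative use with placeholder quaternions (w, x, y, z).
q_p = np.array([1.0, 0.0, 0.0, 0.0])        # pelvis tracker
q_t = np.array([0.966, 0.259, 0.0, 0.0])    # instrument, ~30 deg roll
abd, ante = instrument_angles(q_p, q_t)
```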
In an exemplary form, the IMU can be rigidly attached to a cup reamer with a known orientation relative to the reaming direction, so that the orientation of the cup reamer relative to the pelvis is known and dynamically updated by the several IMUs (for example, a first IMU mounted on the cup reamer and a second IMU mounted on the pelvis).

[00191] The surgical navigation computer software program provides a graphical user interface (associated with the surgical navigation system) that can display a virtual model of the patient's pelvis and a virtual model of the surgical tool in question, in this case a cup reamer (the virtual model of the patient's pelvis having already been completed in accordance with the virtual templating step, and the virtual model of the cup reamer or other surgical tool having been previously loaded into the system for the specific cup reamer and the other surgical tools that may be used), and can update the orientation of the pelvis and of the surgical tool in real time via the display, providing position and orientation information to the surgeon. Instead of using a display, the present system may include surgical devices with indicator lights that indicate to the surgeon whether the reamer is correctly oriented and, if not, in which direction(s) the reamer needs to be repositioned to orient it correctly in accordance with the preoperative planning. After reaming using the cup reamer, the IMU can be removed from the cup reamer and rigidly attached to a cup inserter with a known orientation relative to the direction of the inserter. The cup inserter can then be used to place the cup implant, with the IMUs continuing to provide acceleration feedback that the software uses to calculate position, providing real-time feedback on the position of the pelvis relative to the cup inserter. If holes are drilled in the pelvis before or after positioning the cup, the IMU (optionally previously mounted on a registration tool) can be rigidly attached to a surgical drill to ensure the correct orientation of the drill relative to the pelvis. An optional analogous registration tool and set of IMUs can be used with the software system to assist in the placement of the femoral stem component.

[00192] As an example, a first IMU can be mounted on another surgical tool at a known location. In an exemplary manner, the IMU (optionally previously mounted on the femoral registration tool) can be rigidly attached to a surgical saw at a known location, so that the movement of the IMU correspondingly reflects the known movement of the surgical saw. Given that a second IMU is fixedly mounted on the femur at a known location, the IMUs work together to provide dynamically updated information to the software system about changes in the position (via acceleration data) of both the femur and the surgical saw.

[00193] The software program, as mentioned earlier, provides a display that allows the surgeon to view virtual models of the patient's femur and of the surgical tool in question, in this case a surgical saw (the virtual model of the patient's femur having already been completed in accordance with the virtual templating step, and the virtual model of the surgical saw or other surgical tool having been previously loaded into the system for the specific surgical saw and the other surgical tools that may be used), and is configured to update the orientation of the femur and of the surgical tool in real time via the display, providing position and orientation information to the surgeon.
Instead of using a display, the present system may include surgical devices having indicator lights that indicate to the surgeon whether the surgical saw is oriented correctly and, if not, in which direction(s) the surgical saw needs to be repositioned to orient it correctly to make the correct bone cuts consistent with the preoperative planning. After making the necessary bone cuts, the first IMU can be removed from the surgical saw and rigidly attached to a reamer (to correctly ream the intramedullary canal) and then mounted on a femoral stem inserter with a known orientation relative to the inserter direction. The stem inserter can then be used to place the femoral stem implant into the reamed intramedullary canal, with the IMUs continuing to provide feedback that the software uses to calculate the positions and orientations of the femur and of the stem inserter in real time, and to display the virtual models of the femur and of the stem inserter relative to each other in real time via the display, so that the surgeon can visualize the relative position and orientation of the surgical instrument with respect to the patient's anatomy without requiring a direct line of sight to the surgical site.

VIII. Verification of intraoperative positioning

[00194] During or after the placement of a final or trial component or components, radiographic images can be obtained. The images can be used to detect the orientation and position of the trial relative to the anatomy through 3D-to-2D registration of the components and of the anatomy or reference points in the image plane. There can be two configurations of this step, depending on the surgical planning used. If no preoperative reconstruction is available for the patient, a first step can be performed to calibrate the verification image against the previously acquired intraoperative images and to calculate the position of the patient's 3D coordinate system relative to the verification image. This process can be identical to the multiple-view calibration and intraoperative reference point marking described in detail below.

A. Using 3D implant CAD models

[00195] With knowledge of the implant family and the current implant sizes, the 3D positions of the implants can be calculated using the 3D-to-2D image registration technique discussed herein. For example, the 3D-to-2D image registration can be performed as described in Section III (Anatomy Registration), using the implant geometries as the shapes to be registered. Additional metrics can be added to the scoring function to take into account the estimated position of the implants and the implant constraints (for example, constraining the pose of the femoral stem to the anatomy and to the cup; constraining the pose of the cup to the anatomy and to the femoral stem).

[00196] The 3D-to-2D image registration can be improved through inputs from the tracking devices, where the optimization algorithm can use the orientations from the tracking devices to constrain and assist the position estimation. As the 3D positions of the implants are preferably known, any spatial and metric measurements, such as leg length, can be computed directly from the registered 3D models. As an example, in the event of a leg length discrepancy, the surgical guidance system can suggest an alternative implant family and sizes to compensate for the difference and bring the leg lengths closer to each other.
B. Generic templates without implant CAD models

[00197] As shown in Figure 49, the orientation of the implant can be tracked and determined by the surgical navigation system without the corresponding orthopedic implant CAD models. In an exemplary manner, radiographic images of the patient's anatomy after orthopedic implant positioning can be obtained. These images can be enhanced to allow the software to automatically detect the edges of the objects represented in the images, as well as to differentiate the objects using the texture in the images, in order to determine the relative position of the objects in the images with respect to each other. Likewise, data from the tracking devices can be used simultaneously with the object detection from the radiographic images to determine the relative position of the orthopedic implant (such as a femoral stem or an acetabular cup). These relative position data from the 2D images can be registered to the 3D anatomical model through 3D-to-2D image registration, as described in Section III (Anatomy Registration). After registration to the 3D model, the orientation of the orthopedic implant can be verified.

IX. Multiple-view intraoperative imaging

[00198] Multiple-view intraoperative imaging may be necessary if no preoperative image is captured but a 3D coordinate system or anatomical model is desired to assist in the placement of components. Intraoperative images can be acquired to capture both the patient's anatomy and the imaging target (as discussed earlier). A plurality of images can be acquired intraoperatively using conventional imaging modalities, including, without limitation, X-ray imaging or fluoroscopy. As an example, for the pelvis, a set of two or more images (AP, Judet RPO, Judet LPO) can be acquired, where the compilation of images preferably contains all the surgical reference points necessary for the placement of an acetabular component. For the proximal femur anatomy, a set of two or more images (AP and lateral) can be acquired, where the image compilation preferably contains all the necessary surgical reference points of both the proximal femur and the intramedullary canal for placement of the femoral stem.

A. Calibration of multiple views

[00199] The calibration of multiple views, in an exemplary manner and in accordance with the present disclosure, may include a process of extracting the image acquisition parameters to reconstruct a 3D scene from a set of "n" images. This information can then be used to reconstruct a set of 3D landmarks. An exemplary stereo camera calibration process is described in Figure 52. The input images can be enhanced and filtered, and an automatic target bead localization algorithm (for example, for radiopaque shapes) can be used to find the locations of several or all of the calibration target beads visible in the image (see Figure 53). The detected beads can then be used to calculate the pixel spacing, followed by estimating the pose of the 3D target in space. This can be achieved by initializing the position of the calibration target based on the imager view (for example, an X-ray view), such as by using an a priori position that can be extracted from the location of the calibration target relative to a standard imaging view. This initial pose can then be optimized to achieve the pose that reproduces the projected bead configuration in the input image (see Figure 54). An exemplary alternative to using a calibration target can include using corresponding points in each image.
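One standard way to recover a 3D reference point from two calibrated views, as used in the landmark reconstruction described in the next section, is linear (DLT) triangulation; the projection matrices and pixel picks below are placeholders.

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one landmark from two calibrated
    views.  P1 and P2 are 3x4 projection matrices produced by the
    multiple-view calibration; uv1/uv2 are the 2D landmark picks."""
    A = np.stack([uv1[0] * P1[2] - P1[0],
                  uv1[1] * P1[2] - P1[1],
                  uv2[0] * P2[2] - P2[0],
                  uv2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # homogeneous -> Euclidean

# Illustrative use: the true point (10, 5, 100) is recovered exactly.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # view 1
P2 = np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])  # view 2
X = triangulate_dlt(P1, P2, uv1=(0.10, 0.05), uv2=(-0.40, 0.05))
```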
X. Intraoperative reference point marking

[00200] In this exemplary disclosure, reference point marking comprises a process of extracting, from calibrated 2D images, the relevant surgical reference points that may be necessary for placing the implant. A process flowchart is shown in Figure 55. In an exemplary manner, a set of 2D reference points can be identified in a first 2D image; for example, the points can include the left and right ASIS and the pubic tubercle points in the AP image. The initial positions of these landmarks in the image can be identified using an automatic algorithm that uses feature points extracted from the 2D image(s), together with population statistics extracted from statistical anatomical atlases, to calculate the geometric locations of these landmarks; the calculation can be constrained, for example, by using edge and/or texture information and the relative positions of these reference points (see Figure 60).

[00201] The 2D reference points extracted in the first image can then be used, together with the stereo calibration matrix, to create an epipolar line for each reference point in a second or subsequent image. The location of the reference point in the first image, together with its location along the epipolar line in the second image, can be fed to an optimization algorithm that can extract the 3D position of the reference points in the calibration target or image capture coordinates (see Figure 57). Those familiar with multiple-view geometry will understand that there are several known methods for determining 3D positions from calibrated 2D images of the same scene, all of which are within the scope of this disclosure.

XI. Intraoperative planning based on intraoperative reference point marking

[00202] According to the present disclosure, 3D reference points can be extracted from calibrated intraoperative images and used to calculate the relevant surgical axes and dimensions. In the context of the pelvis, this can include the right and left ASIS and pubic tubercle points to calculate the anterior pelvic plane and the superior-inferior (SI), anterior-posterior (AP) and medial-lateral (ML) directions, and/or the center and dimensions of the anatomical acetabular cup. A surgical planning interface can then be presented to a user, allowing selection of the desired implant sizes and orientations (see Figure 58).

[00203] In view of the above description, it should be evident to those skilled in the art that, although the methods and apparatus described herein constitute exemplary embodiments of the present invention, the invention described herein is not limited to any precise embodiment, and changes can be made to such embodiments without departing from the scope of the invention as defined by the claims. Additionally, it is to be understood that the invention is defined by the claims, and any limitations or elements describing the exemplary embodiments set forth herein are not intended to be incorporated into the interpretation of any claim element unless such a limitation or element is explicitly stated. Likewise, it should be understood that it is not necessary to meet any or all of the identified advantages or objects of the invention disclosed herein in order to fall within the scope of any claim, since the invention is defined by the claims and since inherent and/or unforeseen advantages of the present invention may exist even though they may not have been explicitly discussed herein.
[00203] Following the description above, it should be evident to those skilled in the art that, although the methods and apparatus described herein constitute exemplary embodiments of the present invention, the invention described herein is not limited to any precise embodiment, and changes may be made to such embodiments without departing from the scope of the invention as defined by the claims. Likewise, it is to be understood that the invention is defined by the claims, and any limitations or elements describing the exemplary embodiments set out in this document are not intended to be incorporated into the interpretation of any claim element unless such a limitation or element is explicitly stated. Similarly, it is to be understood that it is not necessary to meet any or all of the identified advantages or objects of the invention disclosed herein in order to fall within the scope of any claim, since the invention is defined by the claims and since inherent and/or unforeseen advantages of the present invention may exist even though they may not have been explicitly discussed herein.

[00204] What is claimed is:
Claims (65)

1. Method for tracking the movement of a body part, the method characterized by the fact that it comprises: collecting movement data from a body part repositioned within a range of motion, the body part having a motion sensor mounted on it; collecting a plurality of radiographic images taken of the body part while the body part is in different positions within the range of motion, the plurality of radiographic images having the body part and the motion sensor within a field of view; and building a virtual three-dimensional model of the body part from the plurality of radiographic images using an identifiable motion sensor structure within at least two of the plurality of radiographic images to calibrate the radiographic images.
2. Method according to claim 1, characterized by the fact that the motion sensor comprises an inertial measurement unit.
3. Method according to claim 2, characterized by the fact that the inertial measurement unit comprises a plurality of accelerometers, a plurality of gyroscopes and a plurality of magnetometers.
4. Method according to any one of claims 1 to 3, characterized by the fact that the motion sensor is mounted non-rigidly on the body part.
5. Method according to claim 4, characterized by the fact that the motion sensor is mounted outside an epidermis at least partially covering the body part.
6. Method according to any one of claims 1 to 3, characterized by the fact that the motion sensor is rigidly mounted on the body part.
7. Method according to any one of claims 1 to 6, characterized by the fact that the structure of the motion sensor comprises at least one of a resistor, a chip, a capacitor, a circuit board and an electrical conductor.
8. Method according to any one of claims 1 to 7, characterized by the fact that the radiographic images comprise X-ray images.
9. Method according to any one of claims 1 to 7, characterized by the fact that the radiographic images comprise fluoroscopic images.
10. Method according to any one of claims 1 to 9, characterized by the fact that the calibration of the radiographic images is performed automatically.
11. Method according to claim 10, characterized by the fact that the automatic calibration of the radiographic images is performed by a computer running a software program.
12. Method according to any one of claims 1 to 11, characterized by the fact that it further comprises: collecting data from the motion sensor that can be used to determine at least one of the position and rotation of the motion sensor as a function of time.
13. Method according to claim 12, characterized by the fact that the data collected from the motion sensor are collected wirelessly.
14. Method according to claim 12, characterized by the fact that the data collected from the motion sensor are collected over a wire connected to the motion sensor.
15. Method according to any one of claims 12 to 14, characterized by the fact that the data collected from the motion sensor are collected by at least one of a phone, a computer, a tablet and a portable memory.
16. Method according to any one of claims 1 to 15, characterized by the fact that it further comprises: registering, in three-dimensional space, the motion sensor to the virtual three-dimensional model of the body part; and correlating data collected from the motion sensor as a function of the position of the body part to create a virtual dynamic model of the body part that is repositionable to reflect the actual positions of the body part when repositioned within the range of motion.
17. Method according to any one of claims 1 to 16, characterized by the fact that it further comprises: building a virtual three-dimensional model of the motion sensor using the plurality of radiographic images.
18. Method according to claim 17, characterized by the fact that the virtual three-dimensional model of the motion sensor is integrated with the virtual three-dimensional model of the body part to create a combined virtual three-dimensional model.
19. Method according to claim 18, characterized by the fact that it further comprises: correlating data collected from the motion sensor as a function of the position of the body part to provide dynamic movement to the combined virtual three-dimensional model.
20. Method according to any one of claims 1 to 19, characterized by the fact that collecting movement data includes recording at least one of changes in the position and rotation of the motion sensor as a function of time.
21. Method according to any one of claims 1 to 19, characterized by the fact that collecting movement data includes recording changes in the acceleration of the motion sensor as a function of time.
22. Method according to any one of claims 1 to 21, characterized by the fact that it further comprises: displaying the virtual three-dimensional model of the body part to reflect changes in the position of the real body part in real time.
23. Method according to any one of claims 1 to 22, characterized by the fact that the collected movement data are time-stamped.
24. System for tracking the movement of a body part, the system characterized by the fact that it comprises: a motion sensor; and a processor configured to be communicatively coupled to the motion sensor, the processor communicatively coupled to a plurality of modules, the modules comprising: a data receiving module configured to record motion data generated by the motion sensor, at least one of the data receiving module and the motion sensor time-stamping the motion data generated by the motion sensor; a radiographic image processing module configured to identify a common feature visible across a plurality of radiographic images in order to calibrate the plurality of radiographic images; and a three-dimensional model module configured to process the plurality of radiographic images and create a virtual three-dimensional model of an object visible in at least part of the plurality of radiographic images.
25. System according to claim 24, characterized by the fact that the motion sensor includes an inertial measurement unit.
26. System according to claim 24, characterized by the fact that the motion sensor includes a plurality of accelerometers.
27. System according to claim 24, characterized by the fact that the motion sensor includes a plurality of magnetometers.
28. System according to claim 24, characterized by the fact that the motion sensor includes a plurality of gyroscopes.
29. System according to any one of claims 24 to 28, characterized by the fact that it further comprises a display communicatively coupled to the processor and operative to display the virtual three-dimensional model.
30. System according to any one of claims 24 to 29, characterized by the fact that it further comprises a radiographic image capture machine.
31. Method for providing surgical navigation, the method characterized by the fact that it comprises: obtaining a plurality of radiographic images taken intraoperatively from multiple vantage angles that include a body part and at least one imaging target; registering the body part intraoperatively in a navigation system; calculating at least one of an orientation and a position of the body part in a three-dimensional coordinate system used by the navigation system; and displaying a virtual model of a tangible item comprising at least one of a body part, a surgical instrument and an orthopedic implant, where displaying the virtual model includes changing, in real time, at least one of a position and an orientation of the virtual model to agree with a change in at least one of the position and orientation of the tangible item.
32. Method according to claim 31, characterized by the fact that: the virtual model of the tangible item comprises a three-dimensional model associated with the navigation system; and the registration step includes registering a two-dimensional image of the body part to the three-dimensional model.
33. Method according to claim 31 or 32, characterized by the fact that the registration step includes identifying two-dimensional anatomical landmarks of the body part from the plurality of radiographic images and registering these two-dimensional anatomical landmarks with three-dimensional reference points associated with a virtual three-dimensional model of the navigation system.
34. Method according to claim 33, characterized by the fact that registering the two-dimensional anatomical landmarks with the three-dimensional reference points includes projecting the three-dimensional reference points onto a two-dimensional image.
35. Method according to claim 34, characterized by the fact that projecting the three-dimensional reference points includes adjusting a pose of the three-dimensional model so that the distance between the selected two-dimensional reference points and the projections of the corresponding three-dimensional reference points is reduced.
36. Method according to any one of claims 31 to 34, characterized by the fact that the registration step includes using a patient-specific instrument that correctly engages the body part in only a single position and orientation.
37. Method according to claim 36, characterized by the fact that the patient-specific instrument includes an inertial measurement unit.
38. Method according to claim 36, characterized by the fact that the patient-specific instrument includes a plurality of accelerometers.
39. Method according to claim 36, characterized by the fact that the patient-specific instrument includes a plurality of gyroscopes.
40. Method according to claim 36, characterized by the fact that the patient-specific instrument includes a plurality of magnetometers.
41. Method according to any one of claims 31 to 40, characterized by the fact that it further comprises: obtaining a plurality of radiographic images taken preoperatively from multiple vantage angles that include the body part; and creating a virtual three-dimensional model of the body part from the plurality of radiographic images.
42. Method according to claim 41, characterized by the fact that it further comprises: calibrating the plurality of radiographic images taken preoperatively, before creating the virtual three-dimensional model.
43. Method according to claim 41 or 42, characterized by the fact that it further comprises: planning a surgical procedure using the virtual three-dimensional model.
44. Method according to any one of claims 31 to 43, characterized by the fact that it further comprises: collecting movement data from the body part repositioned within a range of motion, the body part having a motion sensor mounted on it.
45. Method according to claim 44, characterized by the fact that the motion sensor comprises an inertial measurement unit.
46. Method according to claim 45, characterized by the fact that the inertial measurement unit comprises a plurality of accelerometers, a plurality of gyroscopes and a plurality of magnetometers.
47. Method according to any one of claims 44 to 46, characterized by the fact that the motion sensor is mounted non-rigidly on the body part.
48. Method according to claim 47, characterized by the fact that the motion sensor is mounted outside an epidermis at least partially covering the body part.
49. Method according to any one of claims 44 to 46, characterized by the fact that the motion sensor is rigidly mounted on the body part.
50. Method according to any one of claims 31 to 49, characterized by the fact that the plurality of radiographic images comprises a plurality of X-ray images.
51. Method according to any one of claims 31 to 49, characterized by the fact that the plurality of radiographic images comprises a plurality of fluoroscopic images.
52. Method according to any one of claims 31 to 51, characterized by the fact that it further comprises calibrating the plurality of radiographic images obtained intraoperatively.
53. Method according to claim 52, characterized by the fact that the calibration of the plurality of radiographic images is performed automatically by a computer running a software program.
54. Method according to any one of claims 44 to 53, characterized by the fact that it further comprises: collecting data from the motion sensor that can be used to determine at least one of the position and rotation of the motion sensor as a function of time.
55. Method according to claim 54, characterized by the fact that the data collected from the motion sensor are collected wirelessly.
56. Method according to claim 54, characterized by the fact that the data collected from the motion sensor are collected over a wire connected to the motion sensor.
57. Method according to any one of claims 54 to 56, characterized by the fact that the data collected from the motion sensor are collected by at least one of a phone, a computer, a tablet and a portable memory.
58. Method according to any one of claims 44 to 57, characterized by the fact that it further comprises: registering, in three-dimensional space, the motion sensor to a virtual three-dimensional model of the body part; and correlating data collected from the motion sensor as a function of the position of the body part to create a virtual dynamic model of the body part that is repositionable to reflect the actual positions of the body part when repositioned within a range of motion.
59. Method according to any one of claims 44 to 57, characterized by the fact that it further comprises: building a virtual three-dimensional model of the motion sensor using the plurality of radiographic images.
60. Method according to claim 58, characterized by the fact that it further comprises: building a virtual three-dimensional model of the motion sensor using the plurality of radiographic images.
61. Method according to claim 60, characterized by the fact that the virtual three-dimensional model of the motion sensor is integrated with the virtual three-dimensional model of the body part to create a combined virtual three-dimensional model.
62. Method according to claim 61, characterized by the fact that it further comprises: correlating data collected from the motion sensor as a function of the position of the body part to provide dynamic movement to the combined virtual three-dimensional model.
63. Method according to claim 62, characterized by the fact that collecting movement data includes recording at least one of changes in the position and rotation of the motion sensor as a function of time.
64. Method according to claim 62, characterized by the fact that collecting movement data includes recording changes in the acceleration of the motion sensor as a function of time.
65. Method according to claim 62, characterized by the fact that the collected movement data are time-stamped.
Patent family:
Publication number | Publication date
JP2021154168A | 2021-10-07
US20190133693A1 | 2019-05-09
EP3618715A4 | 2021-02-17
EP3618715A1 | 2020-03-11
CN110944594A | 2020-03-31
WO2018236936A1 | 2018-12-27
JP2020527380A | 2020-09-10
Legal status:
2021-11-03 | B350 | Update of information on the portal [chapter 15.35 of the patent gazette]
Priority:
Application number | Filing date
US 62/521,582 | 2017-06-19
US 62/575,905 | 2017-10-23
US 62/617,383 | 2018-01-15
PCT/US2018/038371 | 2018-06-19